The following is a step-by-step guide to various lemmatization approaches in Python, with examples and code implementations. It is recommended that you follow the given order unless you already understand the topic, in which case you can jump directly to any of the approaches below.
What is Lemmatization?
In contrast to stemming, lemmatization is a lot more powerful. It looks beyond word reduction and considers a language’s full vocabulary to apply a morphological analysis to words, aiming to remove inflectional endings only and to return the base or dictionary form of a word, which is known as the lemma.
For clarity, look at the examples given below:
Original Word ---> Root Word (lemma), with the feature illustrated:
meeting ---> meet (core-word extraction)
was ---> be (tense conversion to present tense)
mice ---> mouse (plural to singular)
TIP: It is common practice to convert text to lowercase before lemmatizing (and before most other NLP preprocessing), since many tools treat capitalized words differently.
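To see the difference in practice, here is a minimal sketch (assuming nltk is installed via pip install nltk, as described below) that contrasts a rule-based stemmer with the WordNet lemmatizer:
Python3
import nltk
nltk.download('wordnet')  # data used by the WordNet lemmatizer
from nltk.stem import PorterStemmer, WordNetLemmatizer

stemmer = PorterStemmer()
lemmatizer = WordNetLemmatizer()

# Stemming blindly strips suffixes; lemmatization looks up dictionary forms.
print(stemmer.stem('was'))                   # 'wa' (a non-word)
print(lemmatizer.lemmatize('was', pos='v'))  # 'be'
print(stemmer.stem('mice'))                  # 'mice' (no rule fires)
print(lemmatizer.lemmatize('mice'))          # 'mouse'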
Various Approaches to Lemmatization:
We will go over nine different approaches to lemmatization, each with examples and a code implementation.
- WordNet
- WordNet (with POS tag)
- TextBlob
- TextBlob (with POS tag)
- spaCy
- TreeTagger
- Pattern
- Gensim
- Stanford CoreNLP
1. WordNet Lemmatizer
WordNet is a publicly available lexical database (versions of it exist for over 200 languages) that provides semantic relationships between words. It is one of the earliest and most commonly used lemmatization techniques.
- It is available through the nltk library in Python.
- WordNet links words through semantic relations (e.g., synonymy).
- It groups synonyms in the form of synsets.
- Synset: a group of data elements that are semantically equivalent.
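As a quick illustration (a minimal sketch; it assumes the wordnet corpus has already been downloaded as shown below), you can inspect the synsets a word belongs to:
Python3
from nltk.corpus import wordnet

# Each synset groups lemmas that share one sense of the word
for syn in wordnet.synsets('mouse'):
    print(syn.name(), '--->', syn.lemma_names())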
How to use:
- Download the nltk package: in your Anaconda prompt or terminal, type:
pip install nltk
- Download WordNet from nltk: in your Python console, run:
import nltk
nltk.download('wordnet')
nltk.download('averaged_perceptron_tagger')
Code:
Python3
import nltk
nltk.download('wordnet')
from nltk.stem import WordNetLemmatizer

wnl = WordNetLemmatizer()

# Single-word lemmatization (the default POS is noun)
list1 = ['kites', 'babies', 'dogs', 'flying', 'smiling',
         'driving', 'died', 'tried', 'feet']
for words in list1:
    print(words + " ---> " + wnl.lemmatize(words))
Code:
Python3
# Sentence lemmatization: tokenize first, then lemmatize each token
nltk.download('punkt')  # tokenizer data needed by word_tokenize
string = 'the cat is sitting with the bats on the striped mat under many flying geese'
list2 = nltk.word_tokenize(string)
print(list2)

lemmatized_string = ' '.join([wnl.lemmatize(words) for words in list2])
print(lemmatized_string)
2. WordNet Lemmatizer (with POS tag)
In the above approach, we observed that the WordNet results were not up to the mark. Words like 'sitting' and 'flying' remained the same after lemmatization because, without further information, the lemmatizer treats every word as a noun rather than, say, a verb. To overcome this, we use POS (Part of Speech) tags.
We add a tag with a particular word defining its type (verb, noun, adjective etc).
For example:
Word + POS tag ---> Lemmatized Word
driving + verb 'v' ---> drive
dogs + noun 'n' ---> dog
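To illustrate, the POS can be passed directly as the second argument of lemmatize() (a minimal sketch, reusing the wnl object from the first example):
Python3
# The same word lemmatizes differently once we mark it as a verb
print(wnl.lemmatize('driving'))       # driving (treated as a noun)
print(wnl.lemmatize('driving', 'v'))  # drive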
Code:
Python3
import nltk
from nltk.stem import WordNetLemmatizer
nltk.download('averaged_perceptron_tagger')
from nltk.corpus import wordnet

lemmatizer = WordNetLemmatizer()

# Map the Penn Treebank tags returned by nltk.pos_tag
# to the four POS constants WordNet understands
def pos_tagger(nltk_tag):
    if nltk_tag.startswith('J'):
        return wordnet.ADJ
    elif nltk_tag.startswith('V'):
        return wordnet.VERB
    elif nltk_tag.startswith('N'):
        return wordnet.NOUN
    elif nltk_tag.startswith('R'):
        return wordnet.ADV
    else:
        return None

sentence = 'the cat is sitting with the bats on the striped mat under many badly flying geese'

# Tokenize and POS-tag the sentence
pos_tagged = nltk.pos_tag(nltk.word_tokenize(sentence))
print(pos_tagged)

# Convert the Penn Treebank tags to WordNet tags
wordnet_tagged = list(map(lambda x: (x[0], pos_tagger(x[1])), pos_tagged))
print(wordnet_tagged)

lemmatized_sentence = []
for word, tag in wordnet_tagged:
    if tag is None:
        # No usable tag, so keep the word as-is
        lemmatized_sentence.append(word)
    else:
        lemmatized_sentence.append(lemmatizer.lemmatize(word, tag))
lemmatized_sentence = " ".join(lemmatized_sentence)
print(lemmatized_sentence)
3. TextBlob
TextBlob is a Python library for processing textual data. It provides a simple API to access its methods and perform basic NLP tasks.
Download the TextBlob package: in your Anaconda prompt or terminal, type:
pip install textblob
Code:
Python3
from textblob import TextBlob, Word

# Lemmatize a single word
my_word = 'cats'
w = Word(my_word)
print(w.lemmatize())

# Lemmatize every word of a sentence
sentence = 'the bats saw the cats with stripes hanging upside down by their feet.'
s = TextBlob(sentence)
lemmatized_sentence = " ".join([w.lemmatize() for w in s.words])
print(lemmatized_sentence)
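Note that TextBlob's Word.lemmatize() wraps NLTK's WordNet lemmatizer under the hood, so it inherits the same default of treating every word as a noun; the next approach addresses this.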
4. TextBlob (with POS tag)
Just as with the WordNet approach, leaving out appropriate POS tags produces the same limitations here. So we use one of the more powerful features of the TextBlob module, 'Part of Speech' tagging, to overcome this problem.
Code:
Python3
from textblob import TextBlob

def pos_tagger(sentence):
    sent = TextBlob(sentence)
    # Map the first letter of each Penn Treebank tag to a
    # POS code that the WordNet-based lemmatizer accepts
    tag_dict = {"J": 'a', "N": 'n', "V": 'v', "R": 'r'}
    words_tags = [(w, tag_dict.get(pos[0], 'n')) for w, pos in sent.tags]
    lemma_list = [wd.lemmatize(tag) for wd, tag in words_tags]
    return lemma_list

sentence = "the bats saw the cats with stripes hanging upside down by their feet"
lemma_list = pos_tagger(sentence)
lemmatized_sentence = " ".join(lemma_list)
print(lemmatized_sentence)
The tags used here follow the Penn Treebank tag set; a full table of the tag abbreviations and their meanings is available in the Penn Treebank documentation.
5. spaCy
spaCy is an open-source Python library that parses and "understands" large volumes of text. Separate models are available that cater to specific languages (English, French, German, etc.).
Download the spaCy package:
(a) Open your Anaconda prompt or terminal as administrator and run:
pip install spacy
(b) Now, open your Anaconda prompt or terminal normally and run:
python -m spacy download en_core_web_sm
If successful, you should see a message like:
Linking successful
C:\Anaconda3\envs\spacyenv\lib\site-packages\en_core_web_sm -->
C:\Anaconda3\envs\spacyenv\lib\site-packages\spacy\data\en
You can now load the model via spacy.load('en_core_web_sm').
Code:
Python3
import spacy

# Load the small English model downloaded above
nlp = spacy.load('en_core_web_sm')
doc = nlp(u'the bats saw the cats with best stripes hanging upside down by their feet')

# Collect the individual tokens
tokens = []
for token in doc:
    tokens.append(token)
print(tokens)

# Each token exposes its lemma via the .lemma_ attribute
lemmatized_sentence = " ".join([token.lemma_ for token in doc])
print(lemmatized_sentence)
In the above code, we can see that this approach is more powerful than our previous ones:
- Even pronouns were detected (identified by -PRON- in spaCy v2; newer spaCy versions return the pronoun itself as the lemma).
- Even 'best' was changed to 'good'.
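Because spaCy assigns a POS tag to every token while parsing, no manual tag mapping is needed; a quick sketch, reusing the doc object from the code above, shows the tag behind each lemma:
Python3
# Inspect spaCy's automatically assigned POS tag next to each lemma
for token in doc:
    print(token.text, token.pos_, token.lemma_)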
6. TreeTagger
TreeTagger is a tool for annotating text with part-of-speech and lemma information. It has been successfully used to tag over 25 languages and is adaptable to other languages if a manually tagged training corpus is available. For example, tagging the sentence 'The TreeTagger is easy to use.' produces:
Word       | POS  | Lemma
the        | DT   | the
TreeTagger | NP   | TreeTagger
is         | VBZ  | be
easy       | JJ   | easy
to         | TO   | to
use        | VB   | use
.          | SENT | .
How to use:
1. Download the TreeTagger Python wrapper: in your Anaconda prompt or terminal, type:
pip install treetaggerwrapper
2. Download the TreeTagger software: click on TreeTagger and download the software for your OS.
(Installation steps are given on the website.)
Code:
Python3
import pandas as pd
import treetaggerwrapper as tt

# TAGDIR points to the directory where the TreeTagger software is installed
t_tagger = tt.TreeTagger(TAGLANG='en', TAGDIR=r'C:\Windows\TreeTagger')

pos_tags = t_tagger.tag_text("the bats saw the cats with best stripes hanging upside down by their feet")

original = []
lemmas = []
tags = []

# Each tagged item is a tab-separated "word<TAB>tag<TAB>lemma" string
for t in pos_tags:
    original.append(t.split('\t')[0])
    tags.append(t.split('\t')[1])
    lemmas.append(t.split('\t')[-1])

results = pd.DataFrame({'Original': original, 'Lemma': lemmas, 'Tags': tags})
print(results)
7. Pattern
Pattern is a Python package commonly used for web mining, natural language processing, machine learning, and network analysis. It has many useful NLP capabilities. It also contains a special feature which we will be discussing below.
How to use:
Download the Pattern package: in your Anaconda prompt or terminal, type:
pip install pattern
Code:
Python3
from pattern.en import lemma, lexeme

sentence = "the bats saw the cats with best stripes hanging upside down by their feet"

# lemma() returns the base form of a single word
lemmatized_sentence = " ".join([lemma(word) for word in sentence.split()])
print(lemmatized_sentence)

# lexeme() returns the full set of inflected forms of a word
all_lemmas_for_each_word = [lexeme(wd) for wd in sentence.split()]
print(all_lemmas_for_each_word)
NOTE: If the above code raises a 'generator raised StopIteration' error, just run it again; it usually works after 3-4 tries. (This is a known issue with Pattern on Python 3.7+, where PEP 479 turned a StopIteration raised inside a generator into a RuntimeError.)
8. Gensim
Gensim is designed to handle large text collections using data streaming. Its lemmatization facilities are based on the Pattern package we installed above.
- The gensim.utils.lemmatize() function, found in gensim's utils module, can be used to perform lemmatization. (Note that it was removed in Gensim 4.0, so this approach requires Gensim < 4.0.)
- It uses Pattern's lemmatizer to extract UTF8-encoded tokens in their base form (the lemma).
- By default it only considers nouns, verbs, adjectives, and adverbs (all other words are discarded).
For example:
Word ---> Lemmatized Word
are/is/being ---> be
saw ---> see
How to use:
1. Download the Pattern package: in your Anaconda prompt or terminal, type:
pip install pattern
2. Download the Gensim package: open your Anaconda prompt or terminal as administrator and type:
pip install gensim
OR
conda install -c conda-forge gensim
Code:
Python3
from gensim.utils import lemmatize

sentence = "the bats saw the cats with best stripes hanging upside down by their feet"

# lemmatize() yields byte strings such as b'bat/NN'; decode each one
# and strip the POS suffix to keep only the lemma itself
lemmatized_sentence = [word.decode('utf-8').split('/')[0] for word in lemmatize(sentence)]
print(lemmatized_sentence)
NOTE: The same 'generator raised StopIteration' workaround from the Pattern section applies here, since Gensim's lemmatizer relies on Pattern.
As you may have already noticed in the output, the gensim lemmatizer ignores words like 'the', 'with', and 'by', as they do not fall into the four lemma categories mentioned above (noun/verb/adjective/adverb).
9. Stanford CoreNLP
CoreNLP enables users to derive linguistic annotations for text, including token and sentence boundaries, parts of speech, named entities, numeric and time values, dependency and constituency parses, sentiment, quote attributions, and relations.
- CoreNLP is your one-stop shop for natural language processing in Java!
- CoreNLP currently supports six languages: Arabic, Chinese, English, French, German, and Spanish.
How to use:
1. Get Java 8: download Java 8 (as per your OS) and install it.
2. Get the Stanford CoreNLP package:
2.1) Download Stanford CoreNLP and unzip it.
2.2) Open a terminal:
(a) Go to the directory where you extracted the above file:
cd C:\Users\...\stanford-corenlp-4.1.0
(b) Then start your Stanford CoreNLP server by executing the following command:
java -mx4g -cp "*" edu.stanford.nlp.pipeline.StanfordCoreNLPServer -annotators "tokenize, ssplit, pos, lemma, parse, sentiment" -port 9000 -timeout 30000
(Leave the terminal open for as long as you use this lemmatizer.)
3. Download the stanfordcorenlp Python package: open your Anaconda prompt or terminal and type:
pip install stanfordcorenlp
Code:
Python3
from stanfordcorenlp import StanfordCoreNLP
import json

# Connect to the CoreNLP server started in the terminal above
nlp = StanfordCoreNLP('http://localhost', port=9000)

props = {'annotators': 'pos, lemma', 'pipelineLanguage': 'en', 'outputFormat': 'json'}
sentence = "the bats saw the cats with best stripes hanging upside down by their feet"
parsed_str = nlp.annotate(sentence, properties=props)
print(parsed_str)
Code:
Python3
# annotate() returns a JSON string; parse it into a dictionary first
parsed_dict = json.loads(parsed_str)

lemma_list = []
for item in parsed_dict['sentences'][0]['tokens']:
    for key, value in item.items():
        if key == 'lemma':
            lemma_list.append(value)
print(lemma_list)

lemmatized_sentence = " ".join(lemma_list)
print(lemmatized_sentence)
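When you are finished, it is good practice to release the connection to the server (the stanfordcorenlp wrapper provides a close() method for this):
Python3
# Close the connection to the CoreNLP server
nlp.close()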
Conclusion:
These are the various lemmatization approaches that you can refer to while working on an NLP project. The choice of approach depends entirely on the project's requirements, as each one has its own pros and cons. Lemmatization is essential for projects where sentence structure matters, such as language-understanding applications.