Python NLTK | nltk.tokenize.TabTokenizer()
With the help of the nltk.tokenize.TabTokenizer()
method, we are able to extract tokens from a string of words by splitting on the tab characters between them.
Syntax : tokenize.TabTokenizer()
Return : Returns the list of tokens obtained by splitting the string on tab characters.
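Note that TabTokenizer splits only on the tab character "\t"; spaces and newlines are left inside the tokens. Its behaviour is essentially the same as Python's built-in str.split("\t"), as this minimal NLTK-free sketch illustrates (tab_tokenize is a hypothetical helper, not part of NLTK):

```python
def tab_tokenize(text):
    """Split text on tab characters, mimicking nltk's TabTokenizer."""
    # Only "\t" is treated as a delimiter; other whitespace is kept.
    return text.split("\t")

tokens = tab_tokenize("Geeksfor\tGeeks..\t.$$&* \nis\t for geeks")
print(tokens)
# ['Geeksfor', 'Geeks..', '.$$&* \nis', ' for geeks']
```

This is why the outputs below retain leading spaces and embedded newlines inside individual tokens.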
Example #1 :
In this example, we can see that by using the tokenize.TabTokenizer()
method we are able to extract tokens from a stream of words that have tabs between them.
# import TabTokenizer from nltk
from nltk.tokenize import TabTokenizer

# create a TabTokenizer instance
tk = TabTokenizer()

gfg = "Geeksfor\tGeeks..\t.$$&* \nis\t for geeks"

# split the string on tab characters
geek = tk.tokenize(gfg)

print(geek)
Output :
['Geeksfor', 'Geeks..', '.$$&* \nis', ' for geeks']
Example #2 :
# import TabTokenizer from nltk
from nltk.tokenize import TabTokenizer

# create a TabTokenizer instance
tk = TabTokenizer()

gfg = "The price\t of burger \tin BurgerKing is Rs.36.\n"

# split the string on tab characters
geek = tk.tokenize(gfg)

print(geek)
Output :
['The price', ' of burger ', 'in BurgerKing is Rs.36.\n']
Last Updated : 07 Jun, 2019