Python NLTK | nltk.tokenize.TabTokenizer()

With the help of the nltk.tokenize.TabTokenizer() method, we can extract tokens from a string of words by splitting on the tab characters between them.

Syntax : tokenize.TabTokenizer()
Return : Returns the list of tokens obtained by splitting on tab characters.

Example #1 :
In this example, we can see that by using the tokenize.TabTokenizer() method, we are able to extract tokens from a string in which the words are separated by tabs.

# import TabTokenizer() method from nltk
from nltk.tokenize import TabTokenizer
     
# Create a reference variable for Class TabTokenizer
tk = TabTokenizer()
     
# Create a string input
gfg = "Geeksfor\tGeeks..\t.$$&* \nis\t for geeks"
     
# Use tokenize method
geek = tk.tokenize(gfg)
     
print(geek)

Output :

['Geeksfor', 'Geeks..', '.$$&* \nis', ' for geeks']
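
A quick side note (not part of the original example): since TabTokenizer splits only on the tab character, the result above should match Python's built-in str.split('\t') on the same input. The sketch below compares the two, assuming the current NLTK implementation in which TabTokenizer is a thin wrapper around string splitting.

# import TabTokenizer() method from nltk
from nltk.tokenize import TabTokenizer
     
# Create a string input with tabs
gfg = "Geeksfor\tGeeks..\t.$$&* \nis\t for geeks"
     
# Tokenize with TabTokenizer and with plain str.split('\t')
nltk_tokens = TabTokenizer().tokenize(gfg)
split_tokens = gfg.split('\t')
     
# Both approaches should produce the same list of tokens
print(nltk_tokens)
print(nltk_tokens == split_tokens)  # expected: True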

Example #2 :

# import TabTokenizer() method from nltk
from nltk.tokenize import TabTokenizer
     
# Create a reference variable for Class TabTokenizer
tk = TabTokenizer()
     
# Create a string input
gfg = "The price\t of burger \tin BurgerKing is Rs.36.\n"
     
# Use tokenize method
geek = tk.tokenize(gfg)
     
print(geek)

Output :

['The price', ' of burger ', 'in BurgerKing is Rs.36.\n']
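
As a further (hedged) illustration: TabTokenizer inherits span_tokenize() from NLTK's StringTokenizer, which yields (start, end) character offsets instead of the token strings themselves. The sketch below assumes that inherited method is available in your NLTK version.

# import TabTokenizer() method from nltk
from nltk.tokenize import TabTokenizer
     
# Create a reference variable for Class TabTokenizer
tk = TabTokenizer()
     
# Create a string input
gfg = "The price\t of burger \tin BurgerKing is Rs.36.\n"
     
# span_tokenize yields (start, end) offsets of each token
# (assumed to be inherited from StringTokenizer)
for start, end in tk.span_tokenize(gfg):
    print((start, end), repr(gfg[start:end]))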


