With the help of the nltk.tokenize.TabTokenizer() method, we are able to extract tokens from a string of words on the basis of the tabs between them.
Syntax :
tokenize.TabTokenizer()
Return : Return the tokens of words.
Example #1 :
In this example, we can see that by using the tokenize.TabTokenizer() method, we are able to extract tokens from a string that has tabs between them.
# import TabTokenizer() method from nltk
from nltk.tokenize import TabTokenizer

# Create a reference variable for Class TabTokenizer
tk = TabTokenizer()

# Create a string input
gfg = "Geeksfor\tGeeks..\t.$$&* \nis\t for geeks"

# Use tokenize method
geek = tk.tokenize(gfg)

print(geek)
Output :
['Geeksfor', 'Geeks..', '.$$&* \nis', ' for geeks']
Example #2 :
# import TabTokenizer() method from nltk
from nltk.tokenize import TabTokenizer

# Create a reference variable for Class TabTokenizer
tk = TabTokenizer()

# Create a string input
gfg = "The price\t of burger \tin BurgerKing is Rs.36.\n"

# Use tokenize method
geek = tk.tokenize(gfg)

print(geek)
Output :
['The price', ' of burger ', 'in BurgerKing is Rs.36.\n']
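Since TabTokenizer splits only on the tab character, its output should match Python's built-in str.split("\t"); the sketch below (assuming NLTK is installed) illustrates this, along with the inherited span_tokenize() method, which yields (start, end) character offsets instead of the substrings themselves.

```python
# TabTokenizer splits on "\t", so its result matches str.split("\t").
from nltk.tokenize import TabTokenizer

tk = TabTokenizer()
text = "col1\tcol2\tcol3"

print(tk.tokenize(text))             # ['col1', 'col2', 'col3']
print(tk.tokenize(text) == text.split("\t"))  # True

# span_tokenize() gives character offsets of each token instead
print(list(tk.span_tokenize(text)))  # [(0, 4), (5, 9), (10, 14)]
```

The offsets from span_tokenize() are useful when you need to map tokens back to their positions in the original string, for example to highlight them in the source text.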