With the help of the nltk.tokenize.SpaceTokenizer() method, we are able to extract the tokens from a string of words on the basis of the spaces between them.
Syntax : tokenize.SpaceTokenizer()
Return : Returns the tokens of the words, split on spaces.
Example #1 :
In this example, we can see that by using the tokenize.SpaceTokenizer() method we are able to extract the tokens from a stream of words that are separated by spaces.
# import SpaceTokenizer() method from nltk
from nltk.tokenize import SpaceTokenizer

# Create a reference variable for Class SpaceTokenizer
tk = SpaceTokenizer()

# Create a string input
gfg = "Geeksfor Geeks.. .$$&* \nis\t for geeks"

# Use tokenize method
geek = tk.tokenize(gfg)

print(geek)
Output :
['Geeksfor', 'Geeks..', '.$$&*', '\nis\t', 'for', 'geeks']
Example #2 :
# import SpaceTokenizer() method from nltk
from nltk.tokenize import SpaceTokenizer

# Create a reference variable for Class SpaceTokenizer
tk = SpaceTokenizer()

# Create a string input
gfg = "The price\t of burger \nin BurgerKing is Rs.36.\n"

# Use tokenize method
geek = tk.tokenize(gfg)

print(geek)
Output :
['The', 'price\t', 'of', 'burger', '\nin', 'BurgerKing', 'is', 'Rs.36.\n']
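As the outputs above show, SpaceTokenizer splits only on the space character, so tabs and newlines stay inside the tokens. A minimal sketch of the same behaviour using plain Python (an assumption here is that, for inputs like these, SpaceTokenizer acts like splitting on a single space; nltk is not needed to see the contrast with splitting on any whitespace):

```python
# Splitting on a single space mirrors SpaceTokenizer for these inputs:
# tabs (\t) and newlines (\n) are NOT treated as separators.
gfg = "The price\t of burger \nin BurgerKing is Rs.36.\n"

space_only = gfg.split(' ')    # split on the space character only
any_whitespace = gfg.split()   # split on any whitespace run, for contrast

print(space_only)
# ['The', 'price\t', 'of', 'burger', '\nin', 'BurgerKing', 'is', 'Rs.36.\n']
print(any_whitespace)
# ['The', 'price', 'of', 'burger', 'in', 'BurgerKing', 'is', 'Rs.36.']
```

If you want tabs and newlines treated as separators as well, nltk's WhitespaceTokenizer (which behaves like str.split() with no argument) is the usual choice instead of SpaceTokenizer.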