
Python NLTK | tokenize.WordPunctTokenizer()

Last Updated : 30 Sep, 2019

With the help of the nltk.tokenize.WordPunctTokenizer() class, we can split a string of words or sentences into tokens, separating runs of alphabetic characters from runs of non-alphabetic (punctuation) characters, by calling its tokenize() method.

Syntax : tokenize.WordPunctTokenizer().tokenize(text)
Return : Return a list of tokens, with alphabetic and non-alphabetic characters kept in separate tokens.

Example #1 :
In this example we can see that by using the tokenize.WordPunctTokenizer() class, we are able to extract the tokens from a stream of alphabetic and non-alphabetic characters.




# import WordPunctTokenizer() method from nltk
from nltk.tokenize import WordPunctTokenizer
     
# Create a reference variable for Class WordPunctTokenizer
tk = WordPunctTokenizer()
     
# Create a string input
gfg = "GeeksforGeeks...$$&* \nis\t for geeks"
     
# Use tokenize method
geek = tk.tokenize(gfg)
     
print(geek)


Output :

['GeeksforGeeks', '...$$&*', 'is', 'for', 'geeks']
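Besides tokenize(), WordPunctTokenizer also inherits a span_tokenize() method from its RegexpTokenizer base class, which yields (start, end) character offsets rather than the token strings themselves. This is useful when tokens need to be mapped back to positions in the original text. A small sketch (the sample string here is illustrative):

```python
# span_tokenize() yields (start, end) offsets for each token
from nltk.tokenize import WordPunctTokenizer

tk = WordPunctTokenizer()
text = "GeeksforGeeks is great."

spans = list(tk.span_tokenize(text))
print(spans)    # [(0, 13), (14, 16), (17, 22), (22, 23)]

# Recover each token from its span by slicing the original string
tokens = [text[start:end] for start, end in spans]
print(tokens)   # ['GeeksforGeeks', 'is', 'great', '.']
```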

Example #2 :




# import WordPunctTokenizer() method from nltk
from nltk.tokenize import WordPunctTokenizer
     
# Create a reference variable for Class WordPunctTokenizer
tk = WordPunctTokenizer()
     
# Create a string input
gfg = "The price\t of burger \nin BurgerKing is Rs.36.\n"
     
# Use tokenize method
geek = tk.tokenize(gfg)
     
print(geek)


Output :

['The', 'price', 'of', 'burger', 'in', 'BurgerKing', 'is', 'Rs', '.', '36', '.']
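This splitting behavior is explained by the tokenizer's underlying pattern: per the NLTK documentation, WordPunctTokenizer is a RegexpTokenizer built with the regular expression r'\w+|[^\w\s]+' (runs of word characters, or runs of characters that are neither word characters nor whitespace). A sketch showing that an equivalent tokenizer can be constructed directly:

```python
from nltk.tokenize import RegexpTokenizer, WordPunctTokenizer

text = "The price of burger in BurgerKing is Rs.36."

# Equivalent tokenizer: match runs of word characters (\w+), or runs
# of characters that are neither word chars nor whitespace ([^\w\s]+)
regex_tk = RegexpTokenizer(r'\w+|[^\w\s]+')

# Both tokenizers produce the same token list for this input
assert regex_tk.tokenize(text) == WordPunctTokenizer().tokenize(text)
print(regex_tk.tokenize(text))
```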


