Python NLTK | tokenize.regexp()

With the help of the NLTK tokenize.regexp() module, we can extract tokens from a string using a regular expression with the RegexpTokenizer() method.

Syntax : tokenize.RegexpTokenizer(pattern, gaps=False)
Return : Returns a list of tokens extracted using the regular expression

Example #1 :
In this example, we use the RegexpTokenizer() method to extract a stream of tokens with the help of a regular expression.

# import RegexpTokenizer() method from nltk
from nltk.tokenize import RegexpTokenizer

# Create a reference variable for Class RegexpTokenizer
# gaps=True means the pattern matches the separators between tokens,
# not the tokens themselves; a raw string avoids escape warnings
tk = RegexpTokenizer(r'\s+', gaps=True)

# Create a string input
gfg = "I love Python"

# Use tokenize method
geek = tk.tokenize(gfg)

print(geek)


Output :

['I', 'love', 'Python']

Example #2 :
In this example, we tokenize a different string with the same whitespace pattern.

# import RegexpTokenizer() method from nltk
from nltk.tokenize import RegexpTokenizer

# Create a reference variable for Class RegexpTokenizer
# gaps=True splits on the whitespace matched by the pattern
tk = RegexpTokenizer(r'\s+', gaps=True)

# Create a string input
gfg = "Geeks for Geeks"

# Use tokenize method
geek = tk.tokenize(gfg)

print(geek)


Output :

['Geeks', 'for', 'Geeks']
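Both examples above use gaps=True, where the pattern describes the separators between tokens. With the default gaps=False, the pattern instead describes the tokens themselves, which is handy for dropping punctuation. A minimal sketch (the input string here is an illustrative example, not from the article):

```python
# With gaps=False (the default), the pattern matches the tokens
# themselves rather than the gaps between them
from nltk.tokenize import RegexpTokenizer

# r'\w+' matches runs of word characters, so punctuation is discarded
tk = RegexpTokenizer(r'\w+')

print(tk.tokenize("Geeks, for Geeks!"))  # ['Geeks', 'for', 'Geeks']
```

Note that the commas and exclamation mark never appear in the output, because only spans matching `\w+` are collected as tokens.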


Last Updated : 07 Jun, 2019