Document similarity, as the name suggests, measures how similar two given documents are. By “documents” we mean a collection of strings, for example an essay or a .txt file. Many organizations use document similarity to check for plagiarism. It is also used by exam-conducting institutions to check whether one student copied from another. Therefore, it is both important and interesting to know how all of this works.
Document similarity is calculated via document distance. Here, documents are treated as vectors: each document vector holds the frequency of occurrence of each word in that document, and the document distance is the angle between two such vectors. Let’s see an example:
Say that we are given two documents D1 and D2 as:
D1: “This is a geek”
D2: “This was a geek thing”
The words common to both documents are:
"This a geek"
If we picture D1 and D2 as vectors, with the three shared words as the axes of a 3-D coordinate system, the angle between the two vectors indicates how similar the documents are.
Now if we take dot product of D1 and D2,
Over the combined vocabulary {This, is, was, a, geek, thing}, the frequency vectors are D1 = (1, 1, 0, 1, 1, 0) and D2 = (1, 0, 1, 1, 1, 1), so:
D1.D2 = 1·1 + 1·0 + 0·1 + 1·1 + 1·1 + 0·1
D1.D2 = 3
Only the shared words “This”, “a”, and “geek” contribute to the dot product.
Now that we know how to calculate the dot product of these documents, we can calculate the angle between the document vectors:
cos d = D1.D2 / (|D1| |D2|)
Here d is the document distance, i.e. d = arccos(D1.D2 / (|D1| |D2|)). Since word frequencies are never negative, its value ranges from 0° to 90°, where 0° means the two documents are exactly identical and 90° means they have no words in common.
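The numbers above can be checked directly. Here is a minimal sketch, assuming the combined six-word vocabulary {This, is, was, a, geek, thing} and raw word counts as vector components:

```python
import math

# Frequency vectors over the combined vocabulary
# {This, is, was, a, geek, thing}
d1 = [1, 1, 0, 1, 1, 0]  # "This is a geek"
d2 = [1, 0, 1, 1, 1, 1]  # "This was a geek thing"

dot = sum(x * y for x, y in zip(d1, d2))
norm1 = math.sqrt(sum(x * x for x in d1))   # |D1| = 2
norm2 = math.sqrt(sum(x * x for x in d2))   # |D2| = sqrt(5)

d = math.acos(dot / (norm1 * norm2))  # document distance in radians
print(dot)                            # 3
print(round(math.degrees(d), 2))      # 47.87
```

An angle of about 0.8355 radians (roughly 48°) tells us the two sentences are moderately similar, which matches intuition.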
Now that we know about document similarity and document distance, let’s look at a Python program to calculate the same:
Document similarity program :
Our algorithm to confirm document similarity will consist of three fundamental steps:
- Split the documents in words.
- Compute the word frequencies.
- Calculate the dot product of the document vectors.
For the first step, we will use the .read() method to open and read the contents of the files. As we read the contents, we will split them into a list of words. Then we will build the word frequency list for each file: the occurrence of each word is counted and the list is sorted alphabetically.
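As an illustrative sketch of the read-and-split step (the helper name `read_words` and the regular expression are assumptions, not the article's exact code):

```python
import re

def read_words(filename):
    """Read a file and split its contents into a list of lowercase words."""
    with open(filename) as f:
        text = f.read()
    # Treat any run of letters/digits as a word; everything else separates words
    return re.findall(r"[a-z0-9]+", text.lower())
```

Sorting the word list alphabetically is then just `sorted(read_words(filename))`.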
Now that we have the word list, we will now calculate the frequency of occurrences of the words.
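Counting the occurrences can be done with the standard library's `collections.Counter`, used here as a stand-in for the article's own frequency routine:

```python
from collections import Counter

words = ["this", "was", "a", "geek", "thing"]
freq = Counter(words)        # maps each word to its number of occurrences
print(sorted(freq.items()))  # the frequency list, sorted alphabetically
```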
Lastly, we will calculate the dot product to give the document distance.
That’s all! Time to see the document similarity function:
Here is the full source code.
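The original listing is not reproduced here, so the following is a minimal, self-contained sketch of the three steps above. The function names and the word-splitting regular expression are assumptions; the file names GFG.txt and file.txt match the sample output below.

```python
import math
import re
from collections import Counter

def read_words(filename):
    """Step 1: read a file and split it into lowercase words."""
    with open(filename) as f:
        return re.findall(r"[a-z0-9]+", f.read().lower())

def word_frequencies(words):
    """Step 2: map each word to its number of occurrences."""
    return Counter(words)

def dot_product(freq1, freq2):
    """Step 3a: dot product of two sparse word-frequency vectors."""
    return sum(freq1[w] * freq2[w] for w in freq1 if w in freq2)

def document_distance(file1, file2):
    """Step 3b: angle (in radians) between the two document vectors."""
    f1 = word_frequencies(read_words(file1))
    f2 = word_frequencies(read_words(file2))
    numerator = dot_product(f1, f2)
    denominator = math.sqrt(dot_product(f1, f1) * dot_product(f2, f2))
    return math.acos(numerator / denominator)

if __name__ == "__main__":
    print("The distance between the documents is: %0.6f (radians)"
          % document_distance("GFG.txt", "file.txt"))
```

Note that the dot product only iterates over words present in both frequency maps, since words missing from either document contribute zero.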
File GFG.txt : 15 lines, 4 words, 4 distinct words
File file.txt : 22 lines, 5 words, 5 distinct words
The distance between the documents is: 0.835482 (radians)