Python | Reading contents of PDF using OCR (Optical Character Recognition)
Python is widely used for data analysis, but the data is not always in the required format. In such cases, we convert formats such as PDF or JPG to plain text so that the data can be analyzed more easily. Python offers many libraries for this task.
There are several ways of doing this, including using libraries like PyPDF2. The major disadvantage of these libraries is the encoding scheme: PDF documents come in a variety of encodings, including UTF-8, ASCII, and Unicode, so converting a PDF directly to text can result in loss of data.
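For comparison, the text-based route looks roughly like this; a minimal sketch using PyPDF2's `PdfReader` API (the file name passed in is up to the caller):

```python
def extract_text_with_pypdf2(pdf_path):
    """Text-based extraction: fast, but tied to the PDF's internal
    encoding, so some characters may come out wrong or go missing."""
    from PyPDF2 import PdfReader  # third-party package; imported lazily
    reader = PdfReader(pdf_path)
    return "\n".join(page.extract_text() or "" for page in reader.pages)
```

The `or ""` guards against pages that yield no extractable text at all.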
Let’s see how to read all the contents of a PDF file and store it in a text document using OCR.
First, we convert the pages of the PDF to images; then we use OCR (Optical Character Recognition) to read the content from each image and store it in a text file. The following packages are required:
pip3 install pillow
pip3 install pytesseract
pip3 install pdf2image
sudo apt-get install tesseract-ocr
There are two parts to the program.
Part #1 deals with converting the PDF into image files. Each page of the PDF is stored as an image file. The names of the images stored are:
PDF page 1 -> page_1.jpg
PDF page 2 -> page_2.jpg
PDF page 3 -> page_3.jpg
PDF page n -> page_n.jpg
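Part #1 can be sketched with the pdf2image library; a minimal version of the conversion step (the 300 DPI setting is an assumption, and pdf2image itself relies on the poppler utilities being installed):

```python
def save_pdf_pages_as_images(pdf_path):
    """Save every page of the PDF as page_1.jpg, page_2.jpg, ..."""
    from pdf2image import convert_from_path  # third-party; needs poppler
    pages = convert_from_path(pdf_path, dpi=300)  # one PIL image per page
    filenames = []
    for i, page in enumerate(pages, start=1):
        name = f"page_{i}.jpg"  # naming scheme shown above
        page.save(name, "JPEG")
        filenames.append(name)
    return filenames
```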
Part #2 deals with recognizing text from the image files and storing it in a text file. Here, we process the images and convert them into text. Once we have the text as a string variable, we can do any processing on it. For example, in many PDFs, when a particular word cannot be written entirely at the end of a line, a hyphen (‘-’) is added and the word is continued on the next line. For example –
This is some sample text but this parti-
cular word could not be written in the same line.
For such words, basic pre-processing is done to convert the hyphen and the newline back into the full word. After all the pre-processing is done, the text is stored in a separate text file.
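The hyphen fix described above comes down to a single substitution; a minimal sketch:

```python
import re

def join_hyphenated_words(text):
    # A word broken across lines as "parti-\ncular" becomes "particular":
    # drop any hyphen that sits immediately before a line break.
    return re.sub(r"-\n", "", text)
```

For example, `join_hyphenated_words("parti-\ncular")` returns `"particular"`.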
Below is the implementation:
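Putting the two parts together, a minimal sketch of the pipeline (the input name d.pdf and output name out_text.txt are assumptions):

```python
def ocr_pdf_to_text(pdf_path, out_path):
    """Part 1: save each PDF page as page_n.jpg.
    Part 2: OCR each image and append the text to one output file."""
    from pdf2image import convert_from_path  # third-party; needs poppler
    import pytesseract                       # third-party; needs tesseract-ocr

    pages = convert_from_path(pdf_path)
    with open(out_path, "w") as out:
        for i, page in enumerate(pages, start=1):
            page.save(f"page_{i}.jpg", "JPEG")        # Part 1: page -> image
            text = pytesseract.image_to_string(page)  # Part 2: image -> text
            out.write(text.replace("-\n", ""))        # re-join split words
```

For example, `ocr_pdf_to_text("d.pdf", "out_text.txt")` processes every page and writes the recognized text to one file.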
Input PDF file: (image omitted)
Output Text file: (image omitted)
As we can see, the pages of the PDF were converted to images; the images were then read, and their content was written into a text file.
Advantages of this method include:
- Avoids the data loss that text-based conversion can suffer because of encoding schemes.
- Even handwritten content in a PDF can be recognized, since OCR is used.
- Only particular pages of the PDF can be processed, if desired.
- The text is available as a string variable, so any amount of pre-processing can be done.
Disadvantages of this method include:
- Disk storage is used to save the images on the local system, although the images are small in size.
- OCR cannot guarantee 100% accuracy, though a computer-typed PDF document yields very high accuracy.
- Handwritten PDFs can still be recognized, but the accuracy depends on various factors such as the handwriting and page color.