
How to extract text with OCR from a PDF on Linux?


How do I extract text from a PDF that wasn't built with an index? It's all text, but I can't search or select anything. I'm running Kubuntu, and Okular doesn't have this feature.

Asked by: Guest | Views: 381
Total answers/comments: 5
bert [Entry]

I have had success with the BSD-licensed Linux port of the Cuneiform OCR system.

No binary packages seem to be available, so you need to build it from source. Be sure to have the ImageMagick C++ libraries installed to have support for essentially any input image format (otherwise it will only accept BMP).

While it appears to be essentially undocumented apart from a brief README file, I've found the OCR results quite good. The nice thing about it is that it can output position information for the OCR text in hOCR format, so that it becomes possible to put the text back in the correct position in a hidden layer of a PDF file. This way you can create "searchable" PDFs from which you can copy text.
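For reference, hOCR is ordinary HTML in which the recognized layout is carried in class names and bounding boxes in `title` attributes; a minimal illustrative fragment (the coordinates and words here are invented):

```html
<div class='ocr_page' title='bbox 0 0 2480 3508'>
  <span class='ocr_line' title='bbox 210 300 620 345'>
    <span class='ocrx_word' title='bbox 210 300 390 345'>Hello</span>
    <span class='ocrx_word' title='bbox 410 300 620 345'>world</span>
  </span>
</div>
```

Tools like hocr2pdf read these coordinates to place each word in the hidden text layer.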

I have used hocr2pdf to recreate PDFs out of the original image-only PDFs and OCR results. Sadly, the program does not appear to support creating multi-page PDFs, so you might have to create a script to handle them:

#!/bin/bash
# Run OCR on a multi-page PDF file and create a new pdf with the
# extracted text in hidden layer. Requires cuneiform, hocr2pdf, gs.
# Usage: ./dwim.sh input.pdf output.pdf

set -e

input="$1"
output="$2"

tmpdir="$(mktemp -d)"

# extract images of the pages (note: resolution hard-coded)
gs -SDEVICE=tiffg4 -r300x300 -sOutputFile="$tmpdir/page-%04d.tiff" -dNOPAUSE -dBATCH -- "$input"

# OCR each page individually and convert into PDF
for page in "$tmpdir"/page-*.tiff
do
    base="${page%.tiff}"
    cuneiform -f hocr -o "$base.html" "$page"
    hocr2pdf -i "$page" -o "$base.pdf" < "$base.html"
done

# combine the pages into one PDF
gs -q -dNOPAUSE -dBATCH -sDEVICE=pdfwrite -sOutputFile="$output" "$tmpdir"/page-*.pdf

rm -rf -- "$tmpdir"

Please note that the above script is very rudimentary. For example, it does not retain any PDF metadata.
bert [Entry]

Google Docs will now use OCR to convert your uploaded image/PDF documents to text. I have had good success with it.

They are using the OCR system that is used for the gigantic Google Books project.

However, it must be noted that only PDFs up to 2 MB in size will be accepted for processing.

Update
1. To try it out, upload a PDF smaller than 2 MB to Google Docs from a web browser.
2. Right-click the uploaded document and choose "Open with Google Docs".
Google Docs will convert it to text and save the result as a new file with the same name, but in Google Docs format, in the same folder.
bert [Entry]

PDFBeads works well for me. This thread, “Convert Scanned Images to a Single PDF File”, got me up and running. For a b&w book scan, you need to:

1. Create an image for every page of the PDF; either of the gs examples above should work.
2. Generate hOCR output for each page; I used tesseract (but note that Cuneiform seems to work better).
3. Move the images and the hOCR files to a new folder; the filenames must correspond, so file001.tif needs file001.html, file002.tif needs file002.html, etc.
4. In the new folder, run

pdfbeads * > ../Output.pdf

This will put the collated, OCR'd PDF in the parent directory.
bert [Entry]

Another script using tesseract:

#!/bin/bash
# Run OCR on a multi-page PDF file and collect the extracted text
# into a single text file. Requires tesseract, gs.
# Usage: ./pdf2ocr.sh input.pdf output.txt

set -e

input="$1"
output="$2"

tmpdir="$(mktemp -d)"

# extract images of the pages (note: resolution hard-coded)
gs -SDEVICE=tiff24nc -r300x300 -sOutputFile="$tmpdir/page-%04d.tiff" -dNOPAUSE -dBATCH -- "$input"

# OCR each page individually (tesseract writes "$base.txt")
for page in "$tmpdir"/page-*.tiff
do
    base="${page%.tiff}"
    tesseract "$page" "$base"
done

# combine the pages into one text file
cat "$tmpdir"/page-*.txt > "$output"

rm -rf -- "$tmpdir"
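As an aside, tesseract 3.03 and later can also emit a searchable PDF directly, which avoids the cuneiform/hocr2pdf round trip if a searchable PDF rather than plain text is the goal; a sketch reusing the variables from the script above:

```shell
# With tesseract 3.03+, each page can become a searchable PDF in one
# step; the trailing "pdf" argument selects the output format.
for page in "$tmpdir"/page-*.tiff
do
    base="${page%.tiff}"
    tesseract "$page" "$base" pdf   # writes "$base.pdf"
done

# combine the per-page PDFs as in the first answer
gs -q -dNOPAUSE -dBATCH -sDEVICE=pdfwrite -sOutputFile="$output" "$tmpdir"/page-*.pdf
```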
bert [Entry]

Asprise OCR Library works on most versions of Linux. It can take PDF input and output a searchable PDF.

It's a commercial package. Download a free copy of Asprise OCR SDK for Linux here and run it this way:

aocr.sh input.pdf pdf

Note: the standalone 'pdf' specifies the output format.

Disclaimer: I am an employee of the company producing the above product.