
I have about 1'500 PDFs consisting of only 1 page each, and exhibiting the same structure (see http://files.newsnetz.ch/extern/interactive/downloads/BAG_15m_kzh_2012_de.pdf for an example).

What I am looking for is a way to iterate over all these files (locally, if possible) and extract the actual contents of the table (as CSV, stored into a SQLite DB, whatever).

I would love to do this in Node.js, but couldn't find any suitable libraries for parsing such stuff. Do you know of any?

If not possible in Node.js, I could also code it in Python, if there are better methods available.

– grssnbchr

1 Answer


I didn't know this before, but `less` has this magical ability to read PDF files (it hands them off to a preprocessor such as lesspipe). I was able to extract the table data from your example PDF with this script:

import re
import subprocess

# `less` dumps the PDF as plain text to stdout; decode the bytes for Python 3
output = subprocess.check_output(["less", "BAG_15m_kzh_2012_de.pdf"]).decode("utf-8", "replace")

# Data rows start with a number followed by a dot, e.g. "1. ..."
re_data_prefix = re.compile(r"^[0-9]+[.].*$")
# Fields are runs of non-space characters, optionally joined by single spaces
re_data_fields = re.compile(r"(([^ ]+[ ]?)+)")

for line in output.splitlines():
    if re_data_prefix.match(line):
        print([m[0].strip() for m in re_data_fields.findall(line)])
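For the batch step the question asks about (iterating over all ~1'500 files and writing CSV), here is a minimal Python 3 sketch along the same lines. It is not the answer's code: the `split_fields` heuristic (splitting on runs of two or more spaces), the `pdf_to_rows` helper, and the output filename `tables.csv` are all assumptions, and it still relies on `less`/lesspipe being able to dump the PDFs as text.

```python
import csv
import glob
import re
import subprocess

# Same heuristic as above: data rows start with a number followed by a dot.
re_data_prefix = re.compile(r"^[0-9]+[.].*$")


def split_fields(line):
    """Split a table row on runs of 2+ spaces.

    Assumption: in the text dump, column values may contain single
    spaces, while columns themselves are padded with several spaces.
    """
    return [f.strip() for f in re.split(r" {2,}", line.strip()) if f.strip()]


def pdf_to_rows(path):
    """Dump one PDF to text via `less` and yield its table rows as lists."""
    text = subprocess.check_output(["less", path]).decode("utf-8", "replace")
    for line in text.splitlines():
        if re_data_prefix.match(line):
            yield split_fields(line)


if __name__ == "__main__":
    # Hypothetical batch run: collect every table row of every local PDF
    # into a single CSV file (could just as well feed a SQLite INSERT).
    with open("tables.csv", "w", newline="") as out:
        writer = csv.writer(out)
        for path in sorted(glob.glob("*.pdf")):
            writer.writerows(pdf_to_rows(path))
```

Loading the rows into SQLite instead is a one-line change: replace the `csv.writer` with `executemany("INSERT INTO ... VALUES (?, ...)", pdf_to_rows(path))` on an open connection.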
– Andrew Johnson
    I did a little write up of the solution I finally came up with: http://timogrossenbacher.ch/2014/11/parsing-thousands-of-pdfs-with-javascript/ – grssnbchr Nov 29 '14 at 11:00