Since there is no accessible API for the EUROSTAT data, each query has to be created manually. I found this table of contents, and I want to extract it into a searchable JSON file. The file contains section titles, and each leaf has three links, but how can I connect the links, titles, and sections into one JSON structure?
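Ideally I would end up with entries shaped roughly like this (the field names and values are just placeholders I made up):

{
  "section": "<section path>",
  "title": "<leaf title>",
  "links": ["<link 1>", "<link 2>", "<link 3>"]
}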
I have this basic code for the link annotations:
import PyPDF2

PDFFile = open("table_of_contents_en.pdf", 'rb')
PDF = PyPDF2.PdfFileReader(PDFFile)
pages = PDF.getNumPages()
key = '/Annots'   # page annotations (the links live here)
uri = '/URI'
ank = '/A'        # the action dictionary of an annotation
for page in range(1, 2):
    print("Current Page: {}".format(page))
    pageSliced = PDF.getPage(page)
    pageObject = pageSliced.getObject()
    if key in pageObject.keys():
        ann = pageObject[key]
        for a in ann:
            u = a.getObject()
            if uri in u[ank].keys():
                print(u[ank][uri])
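Instead of just printing, I could collect the URIs into a structure (continuing with the PDF, key, uri and ank variables from above):

links = []
for page in range(PDF.getNumPages()):
    pageObject = PDF.getPage(page).getObject()
    if key in pageObject.keys():
        for a in pageObject[key]:
            u = a.getObject()
            # Some annotations have no action dictionary, so check both keys.
            if ank in u and uri in u[ank].keys():
                links.append({"page": page, "uri": u[ank][uri]})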
And this for text:
pdfFileObj = open('table_of_contents_en.pdf', 'rb')
pdfReader = PyPDF2.PdfFileReader(pdfFileObj)
print(pdfReader.numPages)
pageObj = pdfReader.getPage(0)
print(pageObj.extractText())
pdfFileObj.close()
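The text side is messier, because what extractText() returns depends on the PDF's internal layout. Assuming, just as a guess, that each leaf line ends with a dataset code in parentheses, the parsing could look like this (the pattern is hypothetical and would have to be adjusted to the real output):

import re

# Hypothetical line format: "Some dataset title (some_code)".
line_re = re.compile(r"^(?P<title>.+?)\s*\((?P<code>[a-z0-9_]+)\)\s*$")

entries = []
for raw_line in pageObj.extractText().splitlines():
    m = line_re.match(raw_line.strip())
    if m:
        entries.append({"title": m.group("title"), "code": m.group("code")})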
And this for downloading the zipped files:
import requests

for page in range(1, 2):
    print("Current Page: {}".format(page))
    pageSliced = PDF.getPage(page)
    pageObject = pageSliced.getObject()
    if key in pageObject.keys():
        ann = pageObject[key]
        for a in ann:
            u = a.getObject()
            if uri in u[ank].keys():
                link = u[ank][uri]
                print(link)
                if link.find(".tsv.gz") != -1:
                    r = requests.get(link, allow_redirects=True)
                    filename = link.split("/")[-1].split(".")[0]
                    print(filename)
                    open(filename, 'wb').write(r.content)
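A slightly more robust version of that download step might look like this (a sketch; download_tsv is a helper name I made up, and it assumes the links really end in ".tsv.gz"):

import os
import requests
from urllib.parse import urlsplit

def download_tsv(link):
    # Only fetch the bulk-download links to the zipped .tsv files.
    if not link.endswith(".tsv.gz"):
        return None
    # Derive the local file name from the URL path, e.g. "xyz.tsv.gz".
    filename = os.path.basename(urlsplit(link).path)
    r = requests.get(link, allow_redirects=True)
    r.raise_for_status()  # fail loudly on HTTP errors
    with open(filename, "wb") as f:
        f.write(r.content)
    return filename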
But how can I do all of this together, correctly, and end up with structured data?
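What I have in mind is roughly the sketch below (reusing PDF, key, ank and uri from above): walk the pages once, grab each page's text and its link annotations together, and dump everything as JSON. The title extraction is a placeholder, because I still don't know how to reliably match each title to its three links:

import json

toc = []
for page in range(PDF.getNumPages()):
    pageSliced = PDF.getPage(page)
    pageObject = pageSliced.getObject()

    # Placeholder: the real section/title parsing depends on the exact
    # line layout that extractText() produces for this PDF.
    titles = [ln.strip() for ln in pageSliced.extractText().splitlines() if ln.strip()]

    page_links = []
    if key in pageObject.keys():
        for a in pageObject[key]:
            u = a.getObject()
            if ank in u and uri in u[ank].keys():
                page_links.append(u[ank][uri])

    toc.append({"page": page, "titles": titles, "links": page_links})

with open("toc.json", "w") as f:
    json.dump(toc, f, indent=2)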