
I would like to automate, in SWI-Prolog, the manual download of a CSV file delivered from this page on the Euronext web site. Manually, in this example, it can be done by clicking on the blue arrow at the top of the date/time column. There are certainly solutions in other languages, but I want to stay within a SWI-Prolog solution ... Any idea on how to do it?
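
For reference, here is a minimal sketch of what I would hope to end up with, assuming the direct CSV URL behind that arrow can be discovered (for example via the browser's network tab while clicking it); the URL and file name in the example query are only placeholders:

```prolog
:- use_module(library(http/http_open)).
:- use_module(library(csv)).

%% download_csv(+URL, +File)
%  Stream the document at URL into File, byte for byte.
download_csv(URL, File) :-
    setup_call_cleanup(
        http_open(URL, In, []),
        ( set_stream(In, encoding(octet)),
          setup_call_cleanup(
              open(File, write, Out, [encoding(octet)]),
              copy_stream_data(In, Out),
              close(Out))
        ),
        close(In)).

%% Example query (placeholder URL and file name):
%  ?- download_csv('https://example.com/export/instruments.csv', 'instruments.csv'),
%     csv_read_file('instruments.csv', Rows, []).
```

Once the file is on disk, `csv_read_file/3` from library(csv) turns it into a list of `row/N` terms.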

The funny thing with financial data is that there are many pages whose data and figures can easily be grabbed with DOM + XPath. My question here is how to automate fetching the referential (the reference CSV) as well. The same process applies to many other sites and subjects.
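
On such pages, the DOM + XPath part looks roughly like this, assuming the page is served as plain HTML; the `//td` selector is only an example, not Euronext's actual markup:

```prolog
:- use_module(library(http/http_open)).
:- use_module(library(sgml)).
:- use_module(library(xpath)).

%% page_cell(+URL, -Text)
%  Load a page over HTTP and enumerate the normalised text of its table
%  cells. Adapt the //td selector to the markup of the page at hand.
page_cell(URL, Text) :-
    setup_call_cleanup(
        http_open(URL, In, []),
        load_html(In, DOM, []),
        close(In)),
    xpath(DOM, //td(normalize_space), Text).
```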

  • Grabbing dynamic data from rendered websites is difficult (it seems that the website uses dynamic tables and grabs its data from somewhere else, which is difficult to see). Euronext offers an API (https://www.euronext.com/en/data/web-services) that may be of interest to you. Maybe you could use Firefox to dump the DOM (https://stackoverflow.com/questions/4170331/how-can-i-dump-the-entire-web-dom-in-its-current-state-in-firefox), and use Prolog's SGML/XML parser? – tphilipp Jun 20 '21 at 17:27
  • The best solution I have seen so far seems to be using [puppeteer](https://github.com/puppeteer/puppeteer/), either to fetch files or to dump the DOM after the Ajax code has run and generated the displayed page, which can then be parsed with the [XPath library](https://www.swi-prolog.org/pldoc/man?section=xpath) (see the sketch after these comments). Moreover, Euronext is just one example; more globally, the idea is not to spend time asking each website that publishes public data, especially when it is just to play with the data and not for professional use. – Wisermans Jun 21 '21 at 08:58
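
Following the two comments above, a rough sketch of that pipeline from SWI-Prolog: have a headless browser write the rendered DOM to a file, then parse that file with library(sgml) and library(xpath). The Node.js script name `dump_dom.js` and the `//a(@href)` selector are hypothetical placeholders.

```prolog
:- use_module(library(process)).
:- use_module(library(sgml)).
:- use_module(library(xpath)).

%% dump_dom(+URL, +HtmlFile)
%  Run a (hypothetical) puppeteer script that renders URL, writes the
%  final DOM to HtmlFile, and wait for it to exit successfully.
dump_dom(URL, HtmlFile) :-
    process_create(path(node), ['dump_dom.js', URL, HtmlFile],
                   [stdout(null), process(PID)]),
    process_wait(PID, exit(0)).

%% rendered_link(+HtmlFile, -Href)
%  Parse the dumped DOM and enumerate href attributes (example selector).
rendered_link(HtmlFile, Href) :-
    load_html(HtmlFile, DOM, []),
    xpath(DOM, //a(@href), Href).
```

The `rendered_link/2` half works unchanged on a DOM dumped manually from Firefox, as suggested in the first comment.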

0 Answers