I am new to web scraping. I would like to pull out data from this website: https://bpstat.bportugal.pt/dados/explorer
I have managed to get a response with the GET() function from the httr package (although I do not get a successful response every time I run the code).
library(httr)
URL <- "https://bpstat.bportugal.pt/dados/explorer"
r <- GET(URL)
r
Response [https://bpstat.bportugal.pt/dados/explorer]
Date: 2020-04-09 22:25
Status: 200
Content-Type: text/html; charset=utf-8
Size: 3.36 kB
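Since the request does not succeed every time, I assume something like httr::RETRY could be used to repeat it until a 200 comes back. This is only a sketch of that idea:

library(httr)

# Repeat the GET up to 5 times, pausing between attempts
r <- RETRY("GET", "https://bpstat.bportugal.pt/dados/explorer",
           times = 5, pause_min = 1)

if (status_code(r) == 200) {
  # the HTML of the explorer page as text
  html <- content(r, as = "text", encoding = "UTF-8")
} else {
  stop("Request failed with status ", status_code(r))
}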
I would like to send a request that reproduces the selections I would otherwise make manually on the site:
Accept the cookies on the first page
In the top right corner, select EN for English
Filter by domains – External statistics – Balance of payments
External operations – Balance of payments – current and capital accounts – current account – Goods and services account (highlight the following selection):
Goods account; Services account; Manufacturing services on physical inputs; Maintenance and repair services; Transport services; Travel; Construction services; Insurance and pension services; Financial services; Charges for the use of intellectual property; Telecommunication, computer & information services; Other services provided by companies; Personal, cultural and recreational services; Government goods and services
Counterparty territory: All countries
Data type: Credit; Debit
Periodicity: Monthly
Unit of Measure: Millions of Euros
Select all series (click on them so they are highlighted in dark blue; at the top of the page click "Selected members" and then "Go to associated series")
Go to Associated Series (at the bottom of the screen, increase the number of series shown per page from 10 to 50)
Manually tick all boxes except for "seasonally adjusted"
Go to "Selection list" Select "See in Table"
Download Excel three vertical dots at top ("visible data only")
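For the final download step, I imagine that once I know the direct URL behind the Excel export, httr could save the file to disk. The URL below is only a placeholder, not the real export address:

library(httr)

# Placeholder: the real export URL would have to be found via the
# browser's developer tools while clicking "Download Excel"
excel_url <- "https://bpstat.bportugal.pt/REPLACE-WITH-REAL-EXPORT-URL"

# write_disk() streams the response body straight to a local file
resp <- GET(excel_url, write_disk("bpstat_export.xlsx", overwrite = TRUE))
stop_for_status(resp)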
I have seen a couple of related examples, such as "Send a POST request using httr R package", but I don't know what inputs I need to provide...
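This is roughly what I imagine such a request would look like. Every endpoint and field name below is a guess on my part; I assume the real ones would have to be read from the Network tab of the browser's developer tools while making the selections above:

library(httr)

# Hypothetical endpoint and field names (placeholders only)
endpoint <- "https://bpstat.bportugal.pt/REPLACE-WITH-REAL-ENDPOINT"

r <- POST(
  endpoint,
  body = list(
    domain      = "Balance of payments",   # guessed field name
    data_type   = c("Credit", "Debit"),    # guessed field name
    periodicity = "Monthly",               # guessed field name
    unit        = "Millions of Euros"      # guessed field name
  ),
  encode = "json"   # or "form", depending on what the site expects
)

if (status_code(r) == 200) {
  content(r)   # parsed response, once the right inputs are known
}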