I don't know if I am using the correct terminology for asking this question, so please let me know if I need to correct the post.
I want to write a Python script to download files from URLs automatically. However, the ultimate location of each file is not known in advance: the URLs seem to be ASP pages that point to another page containing a JavaScript link. For example, this link redirects to a page that has a "Click to Download" button whose link is javascript:__doPostBack('linkButton','')
I am wondering whether this is even possible to automate in Python, as all the extra steps it seems to require are over my head. I tried the normal URL-retrieval methods, such as those discussed in this post and this post, with no success. For example, running simple code like the following only downloads an empty file, which is not surprising given that an additional click is required to reach the actual file:
import urllib.request
urllib.request.urlretrieve(url, fileName)
or
import requests as rq
r = rq.get('http://somewebsite.com/download?f=someStrings', allow_redirects=True)
with open('filename.extension', 'wb') as f:
    f.write(r.content)
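From what I can tell, __doPostBack works by submitting the page's form with __EVENTTARGET set to the name of the control that was "clicked", along with the page's hidden fields (__VIEWSTATE, __EVENTVALIDATION, etc.). So in principle the button press could be simulated by re-posting those fields. Below is a minimal sketch of building that payload using only the standard library; the control name 'linkButton' comes from the link above, but the rest is an assumption about how a typical ASP.NET page is laid out, not something I have verified against this site:

```python
from html.parser import HTMLParser


class HiddenFieldParser(HTMLParser):
    """Collects name/value pairs from <input type="hidden"> tags."""

    def __init__(self):
        super().__init__()
        self.fields = {}

    def handle_starttag(self, tag, attrs):
        if tag != "input":
            return
        attrs = dict(attrs)
        if attrs.get("type") == "hidden" and "name" in attrs:
            # Hidden inputs with no value attribute become empty strings.
            self.fields[attrs["name"]] = attrs.get("value") or ""


def build_postback_payload(html, event_target):
    """Build the form payload a __doPostBack submission would send:
    every hidden field on the page (e.g. __VIEWSTATE, __EVENTVALIDATION)
    plus __EVENTTARGET naming the control that was 'clicked'."""
    parser = HiddenFieldParser()
    parser.feed(html)
    payload = dict(parser.fields)
    payload["__EVENTTARGET"] = event_target
    payload["__EVENTARGUMENT"] = ""
    return payload
```

If this is on the right track, I imagine the full flow would be: GET the page with a requests.Session (so cookies persist), pass r.text to build_postback_payload(r.text, 'linkButton'), POST the payload back to the same URL, and write the response body to a file. I don't know whether that covers every case, though, or whether the site does further redirects after the postback.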