I am having a problem downloading a file from a specific Korean URL. When I googled how to download files from a URL, I found many suggested solutions such as urlretrieve, urlopen, and wget. However, whichever one I try, it saves a 0-byte PDF file and does not return any error message.
So I tried using other programs such as Postman or J2downloader, and they also saved pdf.do with 0 bytes. I know a .do file can be opened with Acrobat Reader, but the size tells me the contents were not actually downloaded.
The URL is http://dart.fss.or.kr/pdf/download/pdf.do?rcp_no=20210218000576&dcm_no=7808922. If I open it in a web browser, the file downloads correctly.
Now I am not sure whether the problem is in my code or whether the site's download mechanism is different. If it is the site's mechanism, could you tell me how to make it work in Python?
Code that I tried:
final_url = "http://dart.fss.or.kr/pdf/download/pdf.do?rcp_no=20210218000576&dcm_no=7808922"
1.
from urllib.request import urlretrieve

urlretrieve(final_url, "./down2.pdf")
2.
import requests

with open("down.pdf", "wb") as file:
    response = requests.get(final_url, allow_redirects=True)
    print(response.content)
    file.write(response.content)
3.
from urllib.request import urlopen

mem = urlopen(final_url).read()
with open("down.pdf", "wb") as file:
    file.write(mem)
# no need for file.close() here; the with block closes the file
4.
import wget

wget.download(final_url, "my download folder")
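One thing I suspect, but have not confirmed, is that the server rejects requests that do not look like they come from a browser. As a variant of attempt 3, here is a sketch that sends browser-like User-Agent and Referer headers with the request; the header values are my guesses, not something documented by the site:

```python
# Sketch: send browser-like headers in case the server checks them.
# The User-Agent string and the Referer value are assumptions, not verified.
from urllib.request import Request, urlopen

final_url = ("http://dart.fss.or.kr/pdf/download/pdf.do"
             "?rcp_no=20210218000576&dcm_no=7808922")

# Headers a browser would normally send; the Referer points back to the DART site.
headers = {
    "User-Agent": "Mozilla/5.0 (Windows NT 10.0; Win64; x64)",
    "Referer": "http://dart.fss.or.kr/",
}

def download_pdf(url, path):
    """Fetch url with the headers above and write the response body to path."""
    req = Request(url, headers=headers)
    with urlopen(req) as response:
        data = response.read()
    with open(path, "wb") as f:
        f.write(data)
    return len(data)  # a non-zero size would mean the headers made a difference

if __name__ == "__main__":
    print(download_pdf(final_url, "down_with_headers.pdf"), "bytes written")
```

If this still writes 0 bytes, the server is presumably checking something else (cookies or a session from the viewer page, perhaps), and I would need to inspect the browser's actual request to see what differs.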