I know this question has been asked multiple times, but none of the solutions has actually worked for me so far.
I would like to pull some files into a web tool based on a URL.
This seems to be an FTP share, but using
import ftplib
url = 'ftp://ftp.ebi.ac.uk/pub/databases/metabolights/studies/public/MTBLS1167'
ftp = ftplib.FTP(url)
fails with

gaierror: [Errno -2] Name or service not known
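My guess is that ftplib.FTP() wants a bare hostname rather than a full ftp:// URL, so maybe something along these lines is the right direction (untested sketch on my part, with the host and path split by hand):

import ftplib

host = 'ftp.ebi.ac.uk'
path = '/pub/databases/metabolights/studies/public/MTBLS1167'

ftp = ftplib.FTP(host)  # connect with the hostname only
ftp.login()             # anonymous login
ftp.cwd(path)           # change into the study directory
ftp.quit()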
It is easy to download single files with wget:
wget.download(url+'/'+filename, out=ms_dir)
However, the Python wget package does not implement all the features of the Linux tool, so something like wget.download(url+'/*.*', out=ms_dir) does not work.
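Downloading the files one at a time would be perfectly fine, i.e. something like this (the file names here are made-up placeholders; getting the real list is exactly the part I am missing):

import wget

# placeholder names -- the real list is what I still need to obtain
file_list = ['file1.txt', 'file2.txt']

for filename in file_list:
    wget.download(url + '/' + filename, out=ms_dir)  # url and ms_dir as above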
Therefore, I need to pull the list of files that I want to download first and then download them one by one. I tried BeautifulSoup, requests, and urllib, but all the solutions either seem over-complicated for a problem that was probably solved a million times ten years ago, or don't work at all.
For example, with requests:
import requests
response = requests.get(url, params=params)
InvalidSchema: No connection adapters were found for...
Or with urllib3:

import urllib3
http = urllib3.PoolManager()
r = http.request('GET', url)
URLSchemeUnknown: Not supported URL scheme ftp
And so on. I am not sure what I am doing wrong here.
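To show the shape of what I am after, this is roughly what I imagine the end result to look like (untested sketch, combining the ftplib guess from above with wget; the anonymous login and the output directory are assumptions):

import ftplib
import wget

host = 'ftp.ebi.ac.uk'
path = '/pub/databases/metabolights/studies/public/MTBLS1167'
url = 'ftp://' + host + path
ms_dir = '.'  # output directory (placeholder)

# get the list of files in the study directory
ftp = ftplib.FTP(host)
ftp.login()             # anonymous login
ftp.cwd(path)
file_list = ftp.nlst()  # plain directory listing
ftp.quit()

# then download the files one by one
for filename in file_list:
    wget.download(url + '/' + filename, out=ms_dir)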