
I want to download the file that is viewable at this address to a linux remote:

https://drive.google.com/file/d/0Bz7KyqmuGsilT0J5dmRCM0ROVHc/view?usp=sharing

I'm hoping I can do this with wget.

I tried

wget https://drive.google.com/file/d/0Bz7KyqmuGsilT0J5dmRCM0ROVHc/vgg16_weights.h5

and the response was a 404.

Is it possible to wget a google drive file? If so, what is the path to provide? If not, are there any alternatives (bash or other) so I can avoid downloading the file to my local and transferring it to the remote?

Kara

Eugene Brown
  • Read this early answer http://stackoverflow.com/a/25033499/2666859 – Serenity May 26 '16 at 07:12
  • For the information of others, you can see my answer here using CURL (Updated March 2018): https://stackoverflow.com/a/49444877/4043524 – Amit Chahar Mar 27 '18 at 13:25
  • Possible duplicate of [wget/curl large file from google drive](https://stackoverflow.com/questions/25010369/wget-curl-large-file-from-google-drive) – craq Sep 23 '19 at 22:45

7 Answers


Insert your file ID into this URL (https://drive.google.com/uc?export=download&id=), then surround the URL with quotes so that Bash doesn't misinterpret the &, like so:

wget "https://drive.google.com/uc?export=download&id=0Bz7KyqmuGsilT0J5dmRCM0ROVHc"

Reference here.
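Since the file ID is just the path segment between `/d/` and `/view` in the sharing link, building that download URL can be scripted. A minimal sketch (the `sed` pattern is my own, not part of the answer):

```shell
# Sharing link from the question.
share_url="https://drive.google.com/file/d/0Bz7KyqmuGsilT0J5dmRCM0ROVHc/view?usp=sharing"

# Pull out the ID: the segment between /d/ and the next slash.
file_id=$(echo "$share_url" | sed -rn 's#.*/d/([^/]+)/.*#\1#p')

# Build the direct-download URL that wget can fetch.
echo "https://drive.google.com/uc?export=download&id=${file_id}"
```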


When downloading big files, Google Drive adds a security warning that breaks the command above. In that case, you can download the file with the command below, replacing FILEID (both occurrences) and FILENAME with your own values:

wget --load-cookies /tmp/cookies.txt "https://docs.google.com/uc?export=download&confirm=$(wget --quiet --save-cookies /tmp/cookies.txt --keep-session-cookies --no-check-certificate 'https://docs.google.com/uc?export=download&id=FILEID' -O- | sed -rn 's/.*confirm=([0-9A-Za-z_]+).*/\1\n/p')&id=FILEID" -O FILENAME && rm -rf /tmp/cookies.txt

(Script taken from here)
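The hard-to-read part of that one-liner is the `sed` expression that scrapes the `confirm=` token out of the warning page. Here it is in isolation, run against a made-up snippet of the kind of markup the warning page contains (the HTML fragment is for illustration only):

```shell
# Made-up fragment; the real warning page differs, but the confirm=
# token is embedded in a link in roughly this way.
page='<a href="/uc?export=download&amp;confirm=AbCd_123&amp;id=FILEID">Download anyway</a>'

# The same sed expression the one-liner uses to capture the token.
token=$(echo "$page" | sed -rn 's/.*confirm=([0-9A-Za-z_]+).*/\1\n/p')

echo "$token"
```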

GPhilo
  • Google Drive adds a security warning if the file is large. Then the given solution does not work anymore. Does anyone have a solution for this? – pltrdy Jan 10 '17 at 14:25
  • For those using `curl`, the above also works, _e.g._: `curl -L -o file.out 'https://drive.google.com/uc?export=download&id=0Bz7KyqmuGsilT0J5dmRCM0ROVHc'`. Note that the `-L` command-line option is needed for following Google's redirects. – Castaglia Feb 27 '17 at 15:39
  • Not working even for small files anymore; gives me a 404. – Irtaza Apr 17 '17 at 12:33
  • @yazfield Try this script: `id=$(echo $1 | cut -d "=" -f2); url="https://drive.google.com/uc?export=download&id=$id"; wget "$url" -O $2` and run with the original drive link (not the ID, but the full URL) plus an output file name. – nightcod3r Jun 20 '17 at 20:23
  • doesn't work when file has this warning "Exceeds the maximum size that Google can scan. Would you still like to download this file?" For example: https://drive.google.com/a/broadinstitute.org/uc?id=0B60wROKy6OqceTNZRkZnaERWREk&export=download – user553965 Jul 12 '17 at 18:33
  • @user553965 I am also trying to download dbNSFP, but 3.5a. any luck? – jimh Aug 30 '17 at 21:31
  • Worked for me: https://gist.github.com/iamtekeste/3cdfd0366ebfd2c0d805#gistcomment-2359248 – that is this answer with more automation. Just be sure that your file is shared with everyone and not visible only to a restricted user group. – Luigi Pirelli Jun 13 '19 at 13:40
  • [here](https://medium.com/@acpanjan/download-google-drive-files-using-wget-3c2c025a8b99) is a solution that works for big files as well – GPhilo Mar 21 '22 at 09:30
  • Just use `confirm=yes`: `wget "https://drive.google.com/u/3/uc?id=0Bz7KyqmuGsilT0J5dmRCM0ROVHc&export=download&confirm=yes"` – mircobabini Apr 20 '22 at 20:26
  • Confirmed, adding `&confirm=yes` works! @mircobabini – Annahri Jun 06 '22 at 07:05
  • @mircobabini could you write your comment as an answer in order for it to get more visibility? – Zaccharie Ramzi Mar 20 '23 at 11:01
  • @ZaccharieRamzi sure. Done: https://stackoverflow.com/a/75790977/1160173 – mircobabini Mar 20 '23 at 13:40

The shortest way I have found for downloading big files:

git clone https://github.com/chentinghao/download_google_drive.git
cd download_google_drive/
python download_gdrive.py FILE_ID DESTINATION_PATH

Only view access is needed; FILE_ID and DESTINATION_PATH (including the file name) are required. It's working as of November 12, 2019, and has been for more than a year.

Another solution is googleapiclient. It enables you to automate downloading/uploading private files having their ids.

Note: Also make sure that the file is publicly shared, not just with a group or within an organization.

Matias Haeussler

First, click the Share button in the top-right corner and set the permission so that anyone with the link can view.

Click File -> Download as -> PDF Document (.pdf) in the top-left corner to start the download in the browser.

Find the URL in the browser's download manager; in Chrome it is chrome://downloads/.

The URL at the time I am writing this answer is https://docs.google.com/document/export?format=pdf&id=xxx&token=xxx&includes_info_params=true

I was able to download it as a PDF with wget: wget -O xxx.pdf "https://docs.google.com/document/export?format=pdf&id=xxx"
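For what it's worth, the same export endpoint appears to accept the other formats from the Docs "Download as" menu; the format values below are an assumption based on that menu, not taken from the answer, and the document ID is hypothetical:

```shell
# Hypothetical document ID; replace with a real one.
doc_id="1aBcD"

# pdf, docx and txt mirror entries in the Docs "Download as" menu.
for fmt in pdf docx txt; do
  echo "https://docs.google.com/document/export?format=${fmt}&id=${doc_id}"
done
```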

hailinzeng

This works for me: https://www.matthuisman.nz/2019/01/download-google-drive-files-wget-curl.html. Google shows a warning if the file is larger than 100 MB, so that case must be handled separately. The author shows how to do that and provides a script to automate it: https://github.com/matthuisman/gdrivedl

al.zatv

This solution works even for large files, without cookies.

The trick is the &confirm=yes query parameter. Then:

wget "https://drive.google.com/u/3/uc?id=0Bz7KyqmuGsilT0J5dmRCM0ROVHc&export=download&confirm=yes"
mircobabini
  1. Find the ID of the file in the shareable link:

     https://drive.google.com/file/d/ID/view?usp=sharing

  2. Download the file with curl -L using the aforementioned ID:

     curl -o out -L 'https://drive.google.com/uc?export=download&confirm=yes&id=ID'

  3. Rename the downloaded file from out to your filename of preference.
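The three steps can be rolled into one small function (a sketch of my own; `gdrive_fetch` is a made-up name, not from the answer):

```shell
# Hypothetical wrapper combining the three steps above.
gdrive_fetch() {
  local share_url="$1" out_name="$2"
  # Step 1: the ID sits between /d/ and /view in the sharing link.
  local id
  id=$(echo "$share_url" | sed -rn 's#.*/d/([^/]+)/.*#\1#p')
  # Steps 2 and 3: download with curl -L straight to the final name,
  # so no separate rename is needed.
  curl -o "$out_name" -L "https://drive.google.com/uc?export=download&confirm=yes&id=${id}"
}

# Usage:
# gdrive_fetch "https://drive.google.com/file/d/ID/view?usp=sharing" weights.h5
```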
Wok

Here is an answer that has worked for me for large files as of August 10, 2023. The source is this website. To use this command, you must make sure that your file is available to anyone on the internet.

Merely run wget --load-cookies /tmp/cookies.txt "https://docs.google.com/uc?export=download&confirm=$(wget --quiet --save-cookies /tmp/cookies.txt --keep-session-cookies --no-check-certificate 'https://docs.google.com/uc?export=download&id='$FILEID -O- | sed -rn 's/.*confirm=([0-9A-Za-z_]+).*/\1\n/p')&id=$FILEID" -O $FILENAME && rm -rf /tmp/cookies.txt

Here, FILEID is the ID from the download URL (right-click the file >> Share >> Copy link), and FILENAME is the name of the output file. It will download the file (even if it's large) into the directory where you run the command, saved under FILENAME.

If you'd like this as a Bash script (written by GPT-4, but it's tested and works), you can save this as download.sh, make it runnable with chmod +x download.sh, and then run it with ./download.sh.

#!/bin/bash

# Ask for the file id
echo "Please enter the file id:"
read -r FILEID

# Ask for the filename
echo "Please enter the filename:"
read -r FILENAME

# Run the wget command (the variables are quoted so that names with spaces work)
wget --load-cookies /tmp/cookies.txt "https://docs.google.com/uc?export=download&confirm=$(wget --quiet --save-cookies /tmp/cookies.txt --keep-session-cookies --no-check-certificate "https://docs.google.com/uc?export=download&id=${FILEID}" -O- | sed -rn 's/.*confirm=([0-9A-Za-z_]+).*/\1\n/p')&id=${FILEID}" -O "${FILENAME}" && rm -rf /tmp/cookies.txt

In addition, here's a (GPT-4-generated) breakdown of each portion of the command:

1. `wget`: This is a free utility for non-interactive download of files from the web. It supports HTTP, HTTPS, and FTP protocols, as well as retrieval through HTTP proxies.

2. `--load-cookies /tmp/cookies.txt`: This option tells wget to load cookies from the file `/tmp/cookies.txt`.

3. `"https://docs.google.com/uc?export=download&confirm=$(wget --quiet --save-cookies /tmp/cookies.txt --keep-session-cookies --no-check-certificate 'https://docs.google.com/uc?export=download&id='$FILEID -O- | sed -rn 's/.*confirm=([0-9A-Za-z_]+).*/\1\n/p')&id=$FILEID"`: This is the URL to download the file from. It includes a sub-command that uses wget to fetch a confirmation token needed for the download.

4. `--quiet`: This option tells wget to work quietly, i.e., without displaying any output.

5. `--save-cookies /tmp/cookies.txt`: This option tells wget to save cookies to the file `/tmp/cookies.txt`.

6. `--keep-session-cookies`: This option tells wget to keep session cookies. These are cookies that are deleted when the browser is closed.

7. `--no-check-certificate`: This option tells wget not to check the server certificate against the available certificate authorities.

8. `-O-`: This option tells wget to write the documents to standard output.

9. `| sed -rn 's/.*confirm=([0-9A-Za-z_]+).*/\1\n/p'`: This part uses the sed command to extract the confirmation token from the output of the previous wget command.

10. `-O $FILENAME`: This option tells wget to write the downloaded file to a file named `$FILENAME`.

11. `&& rm -rf /tmp/cookies.txt`: This part of the command deletes the `/tmp/cookies.txt` file if the previous commands were successful. The `&&` operator only runs the command to its right if the command to its left was successful. The `rm -rf /tmp/cookies.txt` command removes the file `/tmp/cookies.txt`.
Pro Q