
I'm trying to download a file from google drive in a script, and I'm having a little trouble doing so. The files I'm trying to download are here.

I've looked online extensively and I finally managed to get one of them to download. I got the IDs of the files; the smaller one (1.6 MB) downloads fine. However, the larger file (3.7 GB) always redirects to a page that asks whether I want to proceed with the download without a virus scan. Could someone help me get past that screen?

Here's how I got the first file working -

curl -L "https://docs.google.com/uc?export=download&id=0Bz-w5tutuZIYeDU0VDRFWG9IVUE" > phlat-1.0.tar.gz

When I run the same on the other file,

curl -L "https://docs.google.com/uc?export=download&id=0Bz-w5tutuZIYY3h5YlMzTjhnbGM" > index4phlat.tar.gz

I get the following output: [screenshot of the returned HTML of the virus-scan warning page]

I notice on the third-to-last line, in the link, there's a &confirm=JwkK, which looks like a random four-character string but suggests there's a way to add a confirmation to my URL. One of the links I visited suggested &confirm=no_antivirus, but that's not working.

I hope someone here can help with this!

Benyamin Jafari
Arjun
  • can you please provide the `curl script` you used to download the file from `google drive` as I am unable to download a working file ( image) from this script `curl -u username:pass https://drive.google.com/open?id=0B0QQY4sFRhIDRk1LN3g2TjBIRU0 >image.jpg` – Kasun Siyambalapitiya Nov 18 '16 at 09:06
  • Look at the accepted answer. I used the gdown.pl script `gdown.pl https://drive.google.com/uc?export=download&confirm=yAjx&id=0Bz-w5tutuZIYY3h5YlMzTjhnbGM index4phlat.tar.gz` – Arjun Nov 20 '16 at 21:03
  • 5
    Don't be afraid to scroll! [This answer](http://stackoverflow.com/a/39225039/786559) provides a very nice python script to download in one go. – Ciprian Tomoiagă Dec 21 '16 at 19:36
  • ./gdrive download [FILEID] [--recursive if its a folder] it will ask for you to access a given url and copy paste a token code. – roj4s Nov 23 '18 at 21:09
  • Works as of 04/17/2020, try this: http://github.com/gdrive-org/gdrive, and follow this https://github.com/gdrive-org/gdrive/issues/533#issuecomment-596336395 to create a service account, share the file/folder with the service account address and you can download, even for a publicly shared file/folder! – whyisyoung Apr 17 '20 at 16:41
  • I've posted [an answer](https://stackoverflow.com/a/63781195/3702377) which works well – Benyamin Jafari Feb 11 '21 at 21:06
  • the answer here seems to work just fine for me: https://unix.stackexchange.com/questions/136371/how-to-download-a-folder-from-google-drive-using-terminal/148674 did you try it? `$ wget --no-check-certificate 'https://docs.google.com/uc?export=download&id=FILEID' -O FILENAME` – Charlie Parker Apr 23 '21 at 20:23
  • For me this answer works the best and quick. As of April 2021. http://stackoverflow.com/a/32742700/4773609 – Aman Verma Apr 25 '21 at 15:42
  • I voted to close this question because it is not about programming. [What topics can I ask about here?](https://stackoverflow.com/help/on-topic) – Rob Oct 27 '21 at 21:03

47 Answers

719

July 2023

You can use gdown. Consider also visiting that page for full instructions; this is just a summary and the source repo may have more up-to-date instructions.


Instructions

Install it with the following command:

pip install gdown

After that, you can download any file from Google Drive by running one of these commands:

gdown https://drive.google.com/uc?id=<file_id>  # for files
gdown <file_id>                                 # alternative format
gdown --folder https://drive.google.com/drive/folders/<file_id>  # for folders
gdown --folder --id <file_id>                                   # this format works for folders too

Example: to download the readme file from this directory

gdown https://drive.google.com/uc?id=0B7EVK8r0v71pOXBhSUdJWU1MYUk

The file_id should look something like 0Bz8a_Dbh9QhbNU3SGlFaDg. You can find this ID by right-clicking on the file of interest, and selecting Get link. As of November 2021, this link will be of the form:

# Files
https://drive.google.com/file/d/<file_id>/view?usp=sharing
# Folders
https://drive.google.com/drive/folders/<file_id>
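If you are scripting this, the ID can also be pulled out of either link form with a short regex. A minimal sketch (the helper name extract_drive_id is mine, not part of gdown):

```python
import re

def extract_drive_id(url):
    """Pull the file/folder ID out of a Drive sharing link.

    Handles both link forms shown above:
      https://drive.google.com/file/d/<file_id>/view?usp=sharing
      https://drive.google.com/drive/folders/<file_id>
    """
    m = re.search(r"/(?:file/d|folders)/([\w-]+)", url)
    return m.group(1) if m else None

print(extract_drive_id(
    "https://drive.google.com/file/d/0B7EVK8r0v71pOXBhSUdJWU1MYUk/view?usp=sharing"))
# → 0B7EVK8r0v71pOXBhSUdJWU1MYUk
```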

Caveats

  • Only works on open access files. ("Anyone who has a link can View")
  • Cannot download more than 50 files into a single folder.
    • If you have access to the source file, you can consider using tar/zip to make it a single file to work around this limitation.
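gdown can also be driven from Python rather than the shell (one of the comments below shows the same gdown.download call). A minimal sketch; the wrapper names are mine, and the actual download line is left commented out since it performs real network I/O:

```python
def drive_uc_url(file_id):
    # Build the uc?id= form of the URL that gdown accepts.
    return f"https://drive.google.com/uc?id={file_id}"

def download_with_gdown(file_id, output):
    import gdown  # pip install gdown; imported lazily so drive_uc_url works without it
    return gdown.download(drive_uc_url(file_id), output, quiet=False)

# download_with_gdown("0B7EVK8r0v71pOXBhSUdJWU1MYUk", "readme.txt")
```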
phi
  • 14
    How can we download a folder from Gdrive using gdown? – user1 Mar 15 '19 at 00:48
  • Love this solution. For those who want to put this in a python script, here's a working example: ```import gdown ; import pandas as pd ; file_id="1-oJSymMGBBkXg8T5O8LSf64SvGGIPjxQ" ; url = f'https://drive.google.com/uc?id={file_id}' ; output = 'hello.csv' ; gdown.download(url, output, quiet=False) ; df = pd.read_csv('hello.csv') ; print(df.head()) ``` – Beau Hilton Jun 12 '19 at 12:02
  • 15
    the simple `gdown --id file_id` will do, no need to the full url – Matěj Šmíd Oct 20 '20 at 15:15
  • how do you indicate the name of the file or directory being downloaded? – Charlie Parker Apr 27 '21 at 13:52
  • 2
    Just a hint, after `pip install gdown` on Ubuntu 18.04, the gdown command was not found, I had to search for it and finally found it in ~/.local/bin/gdown. After providing the full path to the binary with `--id file_id` it worked fine. Thank you! – Ethan Arnold May 14 '21 at 08:29
  • For a conda install use `conda install -c conda-forge gdown` – baldwibr Nov 13 '21 at 17:19
  • Works as of May 2022 – Steve Lukis May 07 '22 at 06:39
  • `gdown` has limitations and daily bandwidth limit – EdgeDev Sep 14 '22 at 15:06
  • the easy and straight solution for downloading files from Gdrive to colab is to add file shortcut to your drive (just go to file download page and at top right corner click "add shortcut to drive" then mount Gdrive on your colab notebook then expand the drive folder from lef explorer pan and right click on the destination file and select "copy path" then use following one line of code: !cp copied_file_path /content – Ali karimi Jan 30 '23 at 18:15
225

I wrote a Python snippet that downloads a file from Google Drive, given a shareable link. It works, as of August 2017.

The snippet uses neither gdrive nor the Google Drive API; it relies on the requests module.

When downloading large files from Google Drive, a single GET request is not sufficient. A second one is needed, and this one has an extra URL parameter called confirm, whose value should equal the value of a certain cookie.

import requests

def download_file_from_google_drive(id, destination):
    def get_confirm_token(response):
        for key, value in response.cookies.items():
            if key.startswith('download_warning'):
                return value

        return None

    def save_response_content(response, destination):
        CHUNK_SIZE = 32768

        with open(destination, "wb") as f:
            for chunk in response.iter_content(CHUNK_SIZE):
                if chunk: # filter out keep-alive new chunks
                    f.write(chunk)

    URL = "https://docs.google.com/uc?export=download"

    session = requests.Session()

    response = session.get(URL, params = { 'id' : id }, stream = True)
    token = get_confirm_token(response)

    if token:
        params = { 'id' : id, 'confirm' : token }
        response = session.get(URL, params = params, stream = True)

    save_response_content(response, destination)    


if __name__ == "__main__":
    import sys
    if len(sys.argv) != 3:
        print("Usage: python google_drive.py drive_file_id destination_file_path")
    else:
        # TAKE ID FROM SHAREABLE LINK
        file_id = sys.argv[1]
        # DESTINATION FILE ON YOUR DISK
        destination = sys.argv[2]
        download_file_from_google_drive(file_id, destination)
turdus-merula
  • I am running the snippet `python snippet.py file_id destination`. Is this the correct way of running it? Cause if destination is a folder I'm thrown an error. If I touch a file and use that as a destination the snippet seems to work fine but then does nothing. – Manfredo Aug 30 '17 at 20:03
  • 4
    @Manfredo you need the file name you would like to save the file as, for example, `$ python snippet.py your_google_file_id /your/full/path/and/filename.xlsx` worked for me. in case that does not work, do you have any out put provided? does any file get created? – Jeff Sep 01 '17 at 19:11
  • 1
    @CiprianTomoiaga I have 90% of a progress bar working, using the tqdm Python module. I made a gist: https://gist.github.com/joshtch/8e51c6d40b1e3205d1bb2eea18fb57ae . Unfortunately I haven't found a reliable way of getting the total file size, which you'll need in order to compute the % progress and estimated completion time. – joshtch Jan 04 '18 at 02:09
  • Also, what kind of authentication does the requests module use to access google drives ? OAuth ? For example, where in your above code is this handled - https://requests-oauthlib.readthedocs.io/en/latest/oauth2_workflow.html#web-application-flow ? – tauseef_CuriousGuy Feb 26 '18 at 15:38
  • Can you also tell if your code uses simple access or authorized access - https://developers.google.com/api-client-library/python/start/get_started ? – tauseef_CuriousGuy Feb 26 '18 at 15:52
  • 7
    This is awesome! Here is a tip for drive_File_ID: https//drive.google.com/file/d/"drive_File_ID"/view - between https~~file/d/ and /view of the download link. – Jaeyoung Lee Mar 12 '18 at 09:39
  • How do I get the exact name of the file? I want to download it to a specifc folder with the name it has on google drive – Nouman Apr 12 '20 at 05:25
  • Similarly, I created a Java 11 class that downloads GDrive, Github, Dropbox and OneDrive files ... feel free to steal and/or add pull requests: https://github.com/cytoscape/file-transfer-app/blob/master/src/main/java/org/cytoscape/file_transfer/internal/CloudURL.java – bdemchak Mar 04 '21 at 02:46
96

April 2022

  • First, extract the ID of your desire file from google drive:

    1. In your browser, navigate to drive.google.com.

    2. Right-click on the file, and click "Get a shareable link"

      [screenshot: right-click → "Get a shareable link"]

    3. Then extract the ID of the file from URL:

      [screenshot: the file ID highlighted in the URL]

  • Next, install gdown PyPI module using pip:

    pip install gdown

  • Finally, download the file using gdown and the intended ID:

    gdown --id <put-the-ID>


[NOTE]:

  • In google-colab you have to use ! before bash commands.
    (i.e. !gdown --id 1-1wAx7b-USG0eQwIBVwVDUl3K1_1ReCt)
  • You should change the permission of the intended file from "Restricted" to "Anyone with the link".
Benyamin Jafari
89

As of March 2022, you can use the open source cross-platform command line tool gdrive. In contrast to other solutions, it can also download folders without limitations, and can also work with non-public files.

Source: I found out about gdrive from a comment by Tobi on another answer here.

Current state

There had been issues before with this tool not being verified by Google and with it being unmaintained. Both have been resolved as of a commit from 2021-05-28, which also means the previously needed workaround with a Google service account is no longer required. (In rare cases you may still run into problems; if so, try the ntechp-fork.)

Installing gdrive

  1. Download the 2.1.1 binary. Choose a package that fits your OS, for example gdrive_2.1.1_linux_amd64.tar.gz.

  2. Copy it to your path.

    # extract the archive (it should contain the gdrive binary;
    # adjust the name below if your package differs)
    tar -xzf gdrive_2.1.1_linux_amd64.tar.gz
    sudo cp gdrive /usr/local/bin/gdrive
    sudo chmod a+x /usr/local/bin/gdrive
    

Using gdrive

  1. Determine the Google Drive file ID. For that, right-click the desired file in the Google Drive website and choose "Get Link …". It will return something like https://drive.google.com/open?id=0B7_OwkDsUIgFWXA1B2FPQfV5S8H. Obtain the string behind the ?id= and copy it to your clipboard. That's the file's ID.

  2. Download the file. Of course, use your file's ID instead in the following command.

    gdrive download 0B7_OwkDsUIgFWXA1B2FPQfV5S8H
    
  3. At first usage, the tool will need to obtain access permissions to the Google Drive API. For that, it will show you a link which you have to visit in a browser, and then you will get a verification code to copy&paste back to the tool. The download then starts automatically. There is no progress indicator, but you can observe the progress in a file manager or second terminal.

Additional trick: rate limiting. To download with gdrive at a limited maximum rate (to not swamp the uplink in your local network…), you can use a command like this:

gdrive download --stdout 0B7_OwkDsUIgFWXA1B2FPQfV5S8H | \
    pv -br -L 90k | cat > file.ext

pv is PipeViewer. The command will show the amount of data downloaded (-b) and the rate of download (-r) and limit that rate to 90 kiB/s (-L 90k).

tanius
78

WARNING: This functionality is deprecated. See warning below in comments.


Have a look at this question: Direct download from Google Drive using Google Drive API

Basically you have to create a public directory and access your files by relative reference with something like

wget https://googledrive.com/host/LARGEPUBLICFOLDERID/index4phlat.tar.gz

Alternatively, you can use this script: https://github.com/circulosmeos/gdown.pl

Kos
guadafan
    another good way is to use the linux command line tool "gdrive" https://github.com/prasmussen/gdrive – Tobi Jan 03 '15 at 21:04
  • 1
    I was able to use Nanolx's perl script in combination with the google drive permalink created at http://gdurl.com/ --Thanks! – jadik Feb 25 '15 at 08:09
  • The LARGEPUBLICID make-up url did it for me, thanks @guadafan – Javier López Jul 01 '15 at 23:44
  • 17
    WARNING: Web hosting support in Google Drive is deprecated. "Beginning August 31, 2015, web hosting in Google Drive for users and developers will be deprecated. Google Apps customers can continue to use this feature for a period of one year until August 31, 2016, when serving content via googledrive.com/host/doc id will be discontinued." http://googleappsupdates.blogspot.com/2015/08/deprecating-web-hosting-support-in.html – chrish Sep 18 '15 at 14:44
  • the file should also be under your ownership – DavidTaubmann Jun 22 '16 at 05:45
  • 15
    Unfortunately that doesn't work any longer as of 2018. – Calimo Feb 13 '18 at 08:56
  • 5
    gdown.pl worked great for me too. A quick look at the script shows it's not using that API, it creates a new URL with a parameter `export=download` so it should be good for the foreseeable future unless google changes that URL scheme – Ben Baron Sep 04 '18 at 23:40
  • 1
    gdown does not work for me (2019) `Couldn't download the file :-( ` – Ender Apr 22 '19 at 09:50
  • 1
    @Tobi WARNING: https://github.com/prasmussen/gdrive is no longer maintained, developers also deleted the binary files – alper Dec 13 '20 at 11:01
  • the answer here seems to work just fine for me: https://unix.stackexchange.com/questions/136371/how-to-download-a-folder-from-google-drive-using-terminal/148674 did you try it? `$ wget --no-check-certificate 'https://docs.google.com/uc?export=download&id=FILEID' -O FILENAME`. Is this also deprecated? Seems to work fine for me. – Charlie Parker Apr 23 '21 at 20:23
77

Here's a quick way to do this.

Make sure the link is shared, and it will look something like this:

https://drive.google.com/open?id=FILEID&authuser=0

Then, copy that FILEID and use it like this

wget --no-check-certificate 'https://docs.google.com/uc?export=download&id=FILEID' -O FILENAME

If the file is large and triggers the virus check page, you can do this (but it will download two files: one HTML file and the actual file):

wget --no-check-certificate 'https://docs.google.com/uc?export=download&id=FILEID' -r -A 'uc*' -e robots=off -nd
qwertzguy
dessalines
  • 2
    Hi, Thanks for the reply. If you look at the files on the link i shared, you will see that while the files are shared, they lack the 'authuser=0' tag in the link. Your method didn't work on the files provided! Arjun – Arjun Jun 19 '15 at 21:24
  • 3
    Did not even try with public access, this one worked well for link-only shared files atow. Used it like this: `wget 'https://docs.google.com/uc?export=download&id=SECRET_ID' -O 'filename.pdf'` – Sampo Sarrala - codidact.org May 17 '16 at 13:49
  • Doesn't work as of 2018, I am getting the antivirus scan web page instead of the file. – Calimo Feb 13 '18 at 08:58
  • There's a good github project called `drive` that is much better equipped to pull from google drive. – dessalines Feb 14 '18 at 17:17
  • 17
    It bypasses antivirus scanner for me in 2018 when used with `-r` flag of `wget`. So it is `wget --no-check-certificate -r 'https://docs.google.com/uc?export=download&id=FILE_ID' -O 'filename'` – Artem Pelenitsyn Sep 21 '18 at 21:14
  • 1
    Worked for me as of 10/2019 and was the perfect solution for me getting a file into a running Docker container that has almost no utility apps running on it. – ammills01 Oct 17 '19 at 11:11
  • Worked as of 08/2020 Sidenote: not a single one of "more upvoted" solutions worked for file size 40 GB – juststuck Aug 14 '20 at 07:14
  • 3
    Thanks, works for me on 09/2020, The FILEID also can be retrieve from such URL pattern: `https://drive.google.com/file/d/FILEID/view?usp=sharing`. – Dai Sep 18 '20 at 03:39
  • 3
    Also Worked for me in 2021 :) Thanks @ArtemPelenitsyn – Erfan Akhavan Aug 03 '21 at 22:17
  • The key for download a link like T ("https://drive.google.com/drive/folders/12wLblskNVBUeryt1xaJTQlIoJac2WehV"), is to use this link to open the page to get the download id by right click-->get link. Then use this id to download with this command `wget --no-check-certificate 'https://docs.google.com/uc?export=download&id=FILEID' -O FILENAME` – YinchaoOnline Oct 05 '21 at 10:36
  • Worked for me on 03/2022. The filename arrived wrong, but the content of the huge file arrived correct. – Paulo Coghi Mar 17 '22 at 17:49
  • The positive side of this approach is the zero dependency of extra tools, scripts and/or library installations. – Paulo Coghi Mar 17 '22 at 17:50
  • Only answer I saw that currently works with wget/curl as requested in the question, nice work. – zrisher May 03 '22 at 18:44
  • To improve this answer, you could describe what the flags do (`-r`, `-A`, `-e`). – Stefan_EOX Aug 09 '22 at 08:12
  • 1
    If you want to download a file larger than 100mb, you should use: `wget --load-cookies /tmp/cookies.txt "https://docs.google.com/uc?export=download&confirm=$(wget --quiet --save-cookies /tmp/cookies.txt --keep-session-cookies --no-check-certificate 'https://docs.google.com/uc?export=download&id=FILEID' -O- | sed -rn 's/.*confirm=([0-9A-Za-z_]+).*/\1\n/p')&id=FILEID" -O FILENAME && rm -rf /tmp/cookies.txt` – Mathews Edwirds Mar 31 '23 at 12:57
  • 1
    This method did not work for me, I was caught by the virus scan prompt, even with the `-r` flag. I was able to get around the by adding `&confirm=t` (e.g. `wget --no-check-certificate 'https://docs.google.com/uc?export=download&id=FILE_ID&confirm=t' -r -A 'uc*' -e robots=off -nd` – Ian M Aug 07 '23 at 00:05
64

The easy way:

(if you just need it for a one-off download)

  1. Go to the Google Drive webpage that has the download link
  2. Open your browser console and go to the "network" tab
  3. Click the download link
  4. Wait for the file to start downloading and find the corresponding request (it should be the last one in the list), then you can cancel the download
  5. Right click on the request and click "Copy as cURL" (or similar)

You should end up with something like:

curl 'https://doc-0s-80-docs.googleusercontent.com/docs/securesc/aa51s66fhf9273i....................blah blah blah...............gEIqZ3KAQ==' --compressed

Paste it into your console, add > my-file-name.extension to the end (otherwise it will write the file into your console), then press Enter :)

The link has some kind of expiration in it, so it will stop working a few minutes after that first request is generated.

Grant G
  • 1
    In Chrome on a Mac it's: View/Developer/Developer Tools/Network tab – Dave X Sep 10 '20 at 13:28
  • 1
    Works Dec 2020, including when I right-click on a 3GB folder in Google Drive and Download, wait for it to zip, zip starts to download split into two parts, I grab the `curl` commands for each, append the `> file.ext` and both run fine (and download in 10 seconds to an AWS instance). – Chris Dec 24 '20 at 19:26
  • Does this link work indefinitely? Or does it expire? – tslater May 19 '21 at 05:46
  • Link isn't shown anymore as for Aug 2021! – AbdelKh Aug 15 '21 at 13:13
  • Still works. @AbdelKh make sure you open F12 tool large enough so that the network tab can show the requests. Copy as cURL from the last one on the list. – limits Jan 29 '22 at 02:22
  • This does indeed work, however, it is inconvenient to manually type the output file name. – Stefan_EOX Aug 09 '22 at 08:19
  • This works with private files! It also works with any file on any cloud service. Thanks! – Daniel Darabos Aug 01 '23 at 11:20
63

Update as of March 2018.

I tried the various techniques given in other answers to download my file (6 GB) directly from Google Drive to my AWS EC2 instance, but none of them worked (perhaps because they are old).

So, for the information of others, here is how I did it successfully:

  1. Right-click on the file you want to download, click share, under link sharing section, select "anyone with this link can edit".
  2. Copy the link. It should be in this format: https://drive.google.com/file/d/FILEIDENTIFIER/view?usp=sharing
  3. Copy the FILEIDENTIFIER portion from the link.
  4. Copy the below script to a file. It uses curl and processes the cookie to automate the downloading of the file.

    #!/bin/bash
    fileid="FILEIDENTIFIER"
    filename="FILENAME"
    curl -c ./cookie -s -L "https://drive.google.com/uc?export=download&id=${fileid}" > /dev/null
    curl -Lb ./cookie "https://drive.google.com/uc?export=download&confirm=`awk '/download/ {print $NF}' ./cookie`&id=${fileid}" -o "${filename}"
    
  5. As shown above, paste the FILEIDENTIFIER in the script. Remember to keep the double quotes!

  6. Provide a name for the file in place of FILENAME. Remember to keep the double quotes and also include the extension in FILENAME (for example, myfile.zip).
  7. Now, save the file and make it executable by running sudo chmod +x download-gdrive.sh in a terminal.
  8. Run the script using ./download-gdrive.sh.

PS: Here is the Github gist for the above given script: https://gist.github.com/amit-chahar/db49ce64f46367325293e4cce13d2424

Jeff Atwood
Amit Chahar
  • for wget replace `-c` with `--save-cookies` and `-b` with `--load-cookies` – untore Apr 08 '18 at 10:00
  • 2
    Works in Jan 2019. I needed to add `"` quotes around `${filename}` on the last line. – Jimbo Feb 11 '19 at 10:18
  • > Run the script using `./download-gdrive.sh" Do not be like me and try to run the script by typing `download-gdrive.sh`, the `./` seems to be mandatory. – Ambroise Rabier Apr 27 '19 at 09:23
  • It says file is not utf-8 encoded and saving is disabled – Chaine Feb 01 '20 at 18:57
  • Why are you using sudo to set the executable bit? This is not necessary. Don't use superuser privileges if they are not needed. – josch Mar 24 '20 at 21:30
  • 1
    I had to add --insecure to make it work. – AnaRhisT Feb 16 '22 at 11:27
  • In my case, I was downloading a big file (> 13 GB) and it kept getting interrupted. I ended up using `-C -` to resume the download until it was fully downloaded. Full line here: ```curl -Lb ./cookie "https://drive.google.com/uc?export=download&confirm=`awk '/download/ {print $NF}' ./cookie`&id=${fileid}" -o "${filename}" -C -```. It is also a nice practice to remove the cookie file afterwards: `rm ./cookie` – Ali Altıntaş Feb 13 '23 at 12:13
59
ggID='put_googleID_here'  
ggURL='https://drive.google.com/uc?export=download'  
filename="$(curl -sc /tmp/gcokie "${ggURL}&id=${ggID}" | grep -o '="uc-name.*</span>' | sed 's/.*">//;s/<.a> .*//')"  
getcode="$(awk '/_warning_/ {print $NF}' /tmp/gcokie)"  
curl -Lb /tmp/gcokie "${ggURL}&confirm=${getcode}&id=${ggID}" -o "${filename}"

How does it work?
Get cookie file and html code with curl.
Pipe html to grep and sed and search for file name.
Get confirm code from cookie file with awk.
Finally download file with cookie enabled, confirm code and filename.

curl -Lb /tmp/gcokie "https://drive.google.com/uc?export=download&confirm=Uq6r&id=0B5IRsLTwEO6CVXFURmpQZ1Jxc0U" -o "SomeBigFile.zip"

If you don't need the filename variable, curl can guess it:
-L Follow redirects
-O Remote-name
-J Remote-header-name

curl -sc /tmp/gcokie "${ggURL}&id=${ggID}" >/dev/null  
getcode="$(awk '/_warning_/ {print $NF}' /tmp/gcokie)"  
curl -LOJb /tmp/gcokie "${ggURL}&confirm=${getcode}&id=${ggID}" 

To extract google file ID from URL you can use:

echo "gURL" | egrep -o '(\w|-){26,}'  
# match more than 26 word characters  

OR

echo "gURL" | sed 's/[^A-Za-z0-9_-]/\n/g' | sed -rn '/.{26}/p'  
# replace non-word characters with new line,   
# print only line with more than 26 word characters 
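The same 26-plus-character heuristic as the egrep/sed one-liners above, sketched in Python for comparison (the function name is mine, for illustration):

```python
import re

def guess_drive_ids(url):
    # Any run of 26+ word characters or dashes is a candidate Drive file ID,
    # mirroring the egrep pattern '(\w|-){26,}' above.
    return re.findall(r"[\w-]{26,}", url)

print(guess_drive_ids(
    "https://drive.google.com/uc?export=download&id=0B5IRsLTwEO6CVXFURmpQZ1Jxc0U"))
# → ['0B5IRsLTwEO6CVXFURmpQZ1Jxc0U']
```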
lapinpt
23

The default behavior of Google Drive is to scan files for viruses. If the file is too big, it will prompt the user and notify them that the file could not be scanned.

At the moment the only workaround I found is to share the file on the web and create a web resource.

Quote from the google drive help page:

With Drive, you can make web resources — like HTML, CSS, and Javascript files — viewable as a website.

To host a webpage with Drive:

  1. Open Drive at drive.google.com and select a file.
  2. Click the Share button at the top of the page.
  3. Click Advanced in the bottom right corner of the sharing box.
  4. Click Change....
  5. Choose On - Public on the web and click Save.
  6. Before closing the sharing box, copy the document ID from the URL in the field below "Link to share". The document ID is a string of uppercase and lowercase letters and numbers between slashes in the URL.
  7. Share the URL that looks like "www.googledrive.com/host/[doc id]", where [doc id] is replaced by the document ID you copied in step 6.
    Anyone can now view your webpage.

Found here: https://support.google.com/drive/answer/2881970?hl=en

So, for example, when you share a file on Google Drive publicly, the share link looks like this:

https://drive.google.com/file/d/0B5IRsLTwEO6CVXFURmpQZ1Jxc0U/view?usp=sharing

Then you copy the file ID and create a googledrive.com link that looks like this:

https://www.googledrive.com/host/0B5IRsLTwEO6CVXFURmpQZ1Jxc0U
Alex
19

Based on the answer from Roshan Sethia

May 2018

Using WGET:

  1. Create a shell script called wgetgdrive.sh as below:

    #!/bin/bash
    
    # Get files from Google Drive
    
    # $1 = file ID
    # $2 = file name
    
    URL="https://docs.google.com/uc?export=download&id=$1"
    
    wget --load-cookies /tmp/cookies.txt "https://docs.google.com/uc?export=download&confirm=$(wget --quiet --save-cookies /tmp/cookies.txt --keep-session-cookies --no-check-certificate $URL -O- | sed -rn 's/.*confirm=([0-9A-Za-z_]+).*/\1\n/p')&id=$1" -O $2 && rm -rf /tmp/cookies.txt
    
  2. Give the right permissions to execute the script

  3. In terminal, run:

    ./wgetgdrive.sh <file ID> <filename>
    

    for example:

    ./wgetgdrive.sh 1lsDPURlTNzS62xEOAIG98gsaW6x2PYd2 images.zip
    
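The sed expression in the script above pulls the confirm token out of the warning page; here is the same extraction in Python, as an illustrative sketch (the function name is mine):

```python
import re

def extract_confirm_token(page_html):
    # Mirrors the sed expression 's/.*confirm=([0-9A-Za-z_]+).*/\1/p':
    # grab the value of the first confirm= parameter found in the page.
    m = re.search(r"confirm=([0-9A-Za-z_]+)", page_html)
    return m.group(1) if m else None

print(extract_confirm_token('href="/uc?export=download&confirm=JwkK&id=XYZ"'))
# → JwkK
```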
Aatif Khan
  • 2
    One of the few answers that still work in 2023! Also, when you can't open the file but only preview it you can't get the file ID from the URL. Just copy the sharing-link, the hash that you can find in that link is the file ID! – fratajcz Feb 24 '23 at 12:56
  • works seamlessly in 2023. for step 2 I used "sudo chmod +x wgetgdrive.sh" – Moses J May 24 '23 at 18:43
12

--UPDATED--

To download the file first get youtube-dl for python from here:

youtube-dl: https://rg3.github.io/youtube-dl/download.html

or install it with pip:

sudo python2.7 -m pip install --upgrade youtube_dl 
# or 
# sudo python3.6 -m pip install --upgrade youtube_dl

UPDATE:

I just found out this:

  1. Right click on the file you want to download from drive.google.com

  2. Click Get Sharable link

  3. Toggle "Link sharing" on

  4. Click on Sharing settings

  5. Click on the top dropdown for options

  6. Click on More

  7. Select [x] On - Anyone with a link

  8. Copy Link

https://drive.google.com/file/d/3PIY9dCoWRs-930HHvY-3-FOOPrIVoBAR/view?usp=sharing       
(This is not a real file address)

Copy the id after https://drive.google.com/file/d/:

3PIY9dCoWRs-930HHvY-3-FOOPrIVoBAR

Paste this into command line:

youtube-dl https://drive.google.com/open?id=

Paste the id behind open?id=

youtube-dl https://drive.google.com/open?id=3PIY9dCoWRs-930HHvY-3-FOOPrIVoBAR
[GoogleDrive] 3PIY9dCoWRs-930HHvY-3-FOOPrIVoBAR: Downloading webpage
[GoogleDrive] 3PIY9dCoWRs-930HHvY-3-FOOPrIVoBAR: Requesting source file
[download] Destination: your_requested_filename_here-3PIY9dCoWRs-930HHvY-3-FOOPrIVoBAR
[download] 240.37MiB at  2321.53MiB/s (00:01)

Hope it helps

jturi
11

All of the above responses seem to obscure the simplicity of the answer or have some nuances that are not explained.

If the file is shared publicly, you can generate a direct download link by just knowing the file ID. The URL must be in the form "https://drive.google.com/uc?id=[FILEID]&export=download". This works as of 11-22-2019. It does not require the receiver to log in to Google, but it does require the file to be shared publicly.

  1. In your browser, navigate to drive.google.com.

  2. Right click on the file, and click "Get a shareable link"

[screenshot: right-click → "Get a shareable link"]

  3. Open a new tab, select the address bar, and paste in the contents of your clipboard, which will be the shareable link. You'll see the file displayed by Google's viewer. The ID is the string right before the "view" component of the URL:

[screenshot: the file ID highlighted in the URL]

  4. Edit the URL so it is in the following format, replacing "[FILEID]" with the ID of your shared file:

    https://drive.google.com/uc?id=[FILEID]&export=download

  5. That's your direct download link. If you click on it in your browser, the file will be "pushed" to your browser, opening the download dialog, allowing you to save or open the file. You can also use this link in your download scripts.

  6. So the equivalent curl command would be:

curl -L "https://drive.google.com/uc?id=AgOATNfjpovfFrft9QYa-P1IeF9e7GWcH&export=download" > phlat-1.0.tar.gz
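The sharing-link-to-direct-link transformation described above can be collapsed into a tiny helper; a sketch under the same assumption of a publicly shared file (the function name is mine):

```python
import re

def direct_download_url(sharing_link):
    # Turn a ".../file/d/<FILEID>/view?usp=sharing" link into the
    # "uc?id=[FILEID]&export=download" form described above.
    m = re.search(r"/file/d/([\w-]+)", sharing_link)
    if not m:
        raise ValueError("not a recognised Drive sharing link")
    return f"https://drive.google.com/uc?id={m.group(1)}&export=download"
```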
CoderBlue
  • This worked for me on Linux with a 160MB file: `wget -r 'https://drive.google.com/uc?id=FILEID&export=download' -O LOCAL_NAME` – JohnM Aug 28 '21 at 17:15
11

I have been using the curl snippet from @Amit Chahar, who posted a good answer in this thread. I found it useful to put it in a bash function rather than in a separate .sh file.

function curl_gdrive {

    GDRIVE_FILE_ID=$1
    DEST_PATH=$2

    curl -c ./cookie -s -L "https://drive.google.com/uc?export=download&id=${GDRIVE_FILE_ID}" > /dev/null
    curl -Lb ./cookie "https://drive.google.com/uc?export=download&confirm=`awk '/download/ {print $NF}' ./cookie`&id=${GDRIVE_FILE_ID}" -o ${DEST_PATH}
    rm -f cookie
}

which can be included in e.g. your ~/.bashrc (and sourced, of course, if not sourced automatically) and used in the following way:

   $ curl_gdrive 153bpzybhfqDspyO_gdbcG5CMlI19ASba imagenet.tar
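The awk call inside curl_gdrive just prints the last field of the cookie line that contains the download-warning token. In isolation it behaves like this (the cookie line below is a made-up sample in the Netscape format that curl -c writes; the token value is hypothetical):

```shell
# Sample cookie line as curl -c might write it (tab-separated fields; token value assumed)
printf '.drive.google.com\tTRUE\t/\tFALSE\t0\tdownload_warning_abc\tJwkK\n' > /tmp/cookie_demo
# Print the last field (the confirm token) of the line matching /download/
awk '/download/ {print $NF}' /tmp/cookie_demo
rm -f /tmp/cookie_demo
```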

UPDATE 2022-03-01 - wget version that works also when virus scan is triggered

function wget_gdrive {

    GDRIVE_FILE_ID=$1
    DEST_PATH=$2

    wget --save-cookies cookies.txt 'https://docs.google.com/uc?export=download&id='$GDRIVE_FILE_ID -O- | sed -rn 's/.*confirm=([0-9A-Za-z_]+).*/\1/p' > confirm.txt
    wget --load-cookies cookies.txt -O $DEST_PATH 'https://docs.google.com/uc?export=download&id='$GDRIVE_FILE_ID'&confirm='$(<confirm.txt)
    rm -fr cookies.txt confirm.txt
}

sample usage:

    $ wget_gdrive 1gzp8zIDo888AwMXRTZ4uzKCMiwKynHYP foo.out
mher
  • 369
  • 3
  • 7
10

The easiest way is:

  1. Create download link and copy fileID
  2. Download with WGET: wget --load-cookies /tmp/cookies.txt "https://docs.google.com/uc?export=download&confirm=$(wget --quiet --save-cookies /tmp/cookies.txt --keep-session-cookies --no-check-certificate 'https://docs.google.com/uc?export=download&id=FILEID' -O- | sed -rn 's/.*confirm=([0-9A-Za-z_]+).*/\1\n/p')&id=FILEID" -O FILENAME && rm -rf /tmp/cookies.txt
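The embedded sed expression is what pulls the confirm token out of the warning page. In isolation it behaves like this (the HTML fragment below is a made-up sample of what the warning page contains; the token is hypothetical):

```shell
# Sample fragment of the virus-warning page (assumed); the confirm token here is made up
page='<a href="/uc?export=download&amp;confirm=JwkK&amp;id=FILEID">Download anyway</a>'
# The same sed pattern as in the one-liner, printing only the captured token
printf '%s\n' "$page" | sed -rn 's/.*confirm=([0-9A-Za-z_]+).*/\1/p'
```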
maniac
  • 1,112
  • 1
  • 13
  • 19
  • Ran it on Kaggle kernel. worked like a charm. Just replace FILEID with the id that comes in the sharable link. It looks like 1K4R-hrYBPFoDTcM3T677Jx0LchTN15OM. – jkr Sep 26 '20 at 10:37
10

As of 2022, you can use this solution:

https://drive.google.com/uc?export=download&id=FILE_ID&confirm=t


Source of "virus scan warning page":

(screenshot: page source of the virus-scan warning page, showing the downloadForm element)

The "Download anyway" form POSTs to the same URL, but with additional parameters:

  • confirm
  • uuid

If you add just one of them, confirm=t, to your original URL, it will download the file without a warning page.

So just change your URL to

https://drive.google.com/uc?export=download&id=FILE_ID&confirm=t 

For example:

$ curl -L 'https://drive.google.com/uc?export=download&id=FILE_ID' > large_video.mp4
  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                             Dload  Upload   Total   Spent    Left  Speed
100  2263    0  2263    0     0   5426      0 --:--:-- --:--:-- --:--:--  5453

After adding confirm=t, result:

$ curl -L 'https://drive.google.com/uc?export=download&id=FILE_ID&confirm=t' > large_video.mp4
  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                             Dload  Upload   Total   Spent    Left  Speed
  0     0    0     0    0     0      0      0 --:--:-- --:--:-- --:--:--     0
100  128M  100  128M    0     0  10.2M      0  0:00:12  0:00:12 --:--:-- 10.9M
Kos
  • 4,890
  • 9
  • 38
  • 42
9

The above answers are outdated as of April 2020, since Google Drive now uses a redirect to the actual location of the file.

Working as of April 2020 on macOS 10.15.4 for public documents:

# this is used for direct drive downloads
function download-google(){
  echo "https://drive.google.com/uc?export=download&id=$1"
  mkdir -p .tmp
  curl -c .tmp/$1cookies "https://drive.google.com/uc?export=download&id=$1" > .tmp/$1intermezzo.html;
  curl -L -b .tmp/$1cookies "$(egrep -o "https.+download" .tmp/$1intermezzo.html)" > $2;
}

# some files are shared using an indirect download
function download-google-2(){
  echo "https://drive.google.com/uc?export=download&id=$1"
  mkdir -p .tmp
  curl -c .tmp/$1cookies "https://drive.google.com/uc?export=download&id=$1" > .tmp/$1intermezzo.html;
  code=$(egrep -o "confirm=(.+)&amp;id=" .tmp/$1intermezzo.html | cut -d"=" -f2 | cut -d"&" -f1)
  curl -L -b .tmp/$1cookies "https://drive.google.com/uc?export=download&confirm=$code&id=$1" > $2;
}

# used like this
download-google <id> <name of item.extension>
danieltan95
  • 810
  • 7
  • 14
  • 1
    `download-google-2` works for me. My file is 3G in size. Thanks @danieltan95 – Saurabh Kumar Apr 17 '20 at 07:33
  • I updated `download-google-2` 's last curl to this `curl -L -b .tmp/$1cookies -C - "https://drive.google.com/uc?export=download&confirm=$code&id=$1" -o $2;` and it now can resume the download. – ssi-anik Apr 18 '20 at 13:00
  • Seems like something went wrong with the download on low speed. another approach I found. https://qr.ae/pNrPaJ – ssi-anik Apr 18 '20 at 14:17
  • download-google worked fine. can you explain the difference between method 1 and 2? – Gayal Kuruppu Jun 27 '20 at 16:05
8

No other answer proposes what works for me as of December 2016 (source):

curl -L https://drive.google.com/uc?id={FileID}

provided the Google Drive file has been shared with those having the link and {FileID} is the string behind ?id= in the shared URL.

Although I did not check with huge files, I believe it might be useful to know.

mmj
  • 5,514
  • 2
  • 44
  • 51
7

I had the same problem with Google Drive.

Here's how I solved the problem using Links 2.

  1. Open a browser on your PC, navigate to your file in Google Drive. Give your file a public link.

  2. Copy the public link to your clipboard (eg right click, Copy link address)

  3. Open a terminal. If you're downloading to another PC/server/machine, you should SSH to it at this point

  4. Install Links 2 (debian/ubuntu method, use your distro or OS equivalent)

    sudo apt-get install links2

  5. Paste the link in to your terminal and open it with Links like so:

    links2 "paste url here"

  6. Navigate to the download link within Links using your Arrow Keys and press Enter

  7. Choose a filename and it'll download your file

mattbell87
  • 565
  • 6
  • 9
7

Use youtube-dl!

youtube-dl https://drive.google.com/open?id=ABCDEFG1234567890

You can also pass --get-url to get a direct download URL.

aularon
  • 11,042
  • 3
  • 36
  • 41
  • 1
    @Ender it still works for me ```youtube-dl https://drive.google.com/open?id=ABCDEFG1234567890aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa [GoogleDrive] ABCDEFG1234567890aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa: Downloading webpage```. maybe you have an outdated version of `youtube-dl` or the link format is not recognized by it for some reason... Try using the format above replacing the id with the file id from your original URL – aularon May 02 '19 at 14:17
  • youtube-dl has problems with rate limiting, occasionally failing with `HTTP Error 429: Too Many Requests` message, especially when you are using the IPs of your hosting provider. – Berkant İpek May 01 '21 at 16:33
5

An easy way to download a file from Google Drive; you can also download the file in Colab:

pip install gdown

import gdown

Then

url = 'https://drive.google.com/uc?id=0B9P1L--7Wd2vU3VUVlFnbTgtS2c'
output = 'spam.txt'
gdown.download(url, output, quiet=False)

or

fileid='0B9P1L--7Wd2vU3VUVlFnbTgtS2c'

gdown "https://drive.google.com/uc?id=$fileid"

Document https://pypi.org/project/gdown/

Jadli
  • 858
  • 1
  • 9
  • 17
  • 1
    cool. but how is it different from [phi's answer](https://stackoverflow.com/a/50670037/1169096) that was posted over a year before yours? – umläute May 13 '20 at 19:02
4

I was unable to get Nanoix's perl script to work, or other curl examples I had seen, so I started looking into the API myself in Python. This worked fine for small files, but large files exhausted available RAM, so I found some other nice chunking code that uses the API's ability to do partial downloads. Gist here: https://gist.github.com/csik/c4c90987224150e4a0b2

Note the bit about downloading the client_secrets.json file from the API interface to your local directory.

Source
$ cat gdrive_dl.py
from pydrive.auth import GoogleAuth  
from pydrive.drive import GoogleDrive    

"""API calls to download a very large google drive file.  The drive API only allows downloading to ram 
   (unlike, say, the Requests library's streaming option) so the file has to be partially downloaded
   and chunked.  Authentication requires a google api key, and a local download of client_secrets.json
   Thanks to Radek for the key functions: http://stackoverflow.com/questions/27617258/memoryerror-how-to-download-large-file-via-google-drive-sdk-using-python
"""

def partial(total_byte_len, part_size_limit):
    s = []
    for p in range(0, total_byte_len, part_size_limit):
        last = min(total_byte_len - 1, p + part_size_limit - 1)
        s.append([p, last])
    return s

def GD_download_file(service, file_id):
  drive_file = service.files().get(fileId=file_id).execute()
  download_url = drive_file.get('downloadUrl')
  total_size = int(drive_file.get('fileSize'))
  s = partial(total_size, 100000000) # I'm downloading BIG files, so 100M chunk size is fine for me
  title = drive_file.get('title')
  originalFilename = drive_file.get('originalFilename')
  filename = './' + originalFilename
  if download_url:
      with open(filename, 'wb') as file:
        print("Bytes downloaded: ")
        for bytes in s:
          headers = {"Range" : 'bytes=%s-%s' % (bytes[0], bytes[1])}
          resp, content = service._http.request(download_url, headers=headers)
          if resp.status == 206 :
                file.write(content)
                file.flush()
          else:
            print('An error occurred: %s' % resp)
            return None
          print(str(bytes[1]) + "...")
      return title, filename
  else:
    return None          


gauth = GoogleAuth()
gauth.CommandLineAuth() #requires cut and paste from a browser 

FILE_ID = 'SOMEID' #FileID is the simple file hash, like 0B1NzlxZ5RpdKS0NOS0x0Ym9kR0U

drive = GoogleDrive(gauth)
service = gauth.service
#file = drive.CreateFile({'id':FILE_ID})    # Use this to get file metadata
GD_download_file(service, FILE_ID) 
slm
  • 15,396
  • 12
  • 109
  • 124
robotic
  • 51
  • 2
4

There's an open-source multi-platform client, written in Go: drive. It's quite nice and full-featured, and also is in active development.

$ drive help pull
Name
        pull - pulls remote changes from Google Drive
Description
        Downloads content from the remote drive or modifies
         local content to match that on your Google Drive

Note: You can skip checksum verification by passing in flag `-ignore-checksum`

* For usage flags: `drive pull -h`
Utgarda
  • 686
  • 4
  • 23
4

This works as of Nov 2017 https://gist.github.com/ppetraki/258ea8240041e19ab258a736781f06db

#!/bin/bash

SOURCE="$1"
if [ "${SOURCE}" == "" ]; then
    echo "Must specify a source url"
    exit 1
fi

DEST="$2"
if [ "${DEST}" == "" ]; then
    echo "Must specify a destination filename"
    exit 1
fi

FILEID=$(echo $SOURCE | rev | cut -d= -f1 | rev)
COOKIES=$(mktemp)

CODE=$(wget --save-cookies $COOKIES --keep-session-cookies --no-check-certificate "https://docs.google.com/uc?export=download&id=${FILEID}" -O- | sed -rn 's/.*confirm=([0-9A-Za-z_]+).*/Code: \1\n/p')

# cleanup the code, format is 'Code: XXXX'
CODE=$(echo $CODE | rev | cut -d: -f1 | rev | xargs)

wget --load-cookies $COOKIES "https://docs.google.com/uc?export=download&confirm=${CODE}&id=${FILEID}" -O $DEST

rm -f $COOKIES
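The two rev | cut | rev pipelines in the script just grab the text after the last occurrence of a delimiter; for example (URL and token values made up):

```shell
# FILEID extraction: everything after the last '=' in the source URL
printf '%s\n' 'https://drive.google.com/uc?export=download&id=ABC123' | rev | cut -d= -f1 | rev
# CODE cleanup: everything after the 'Code:' label, with whitespace trimmed by xargs
printf '%s\n' 'Code: JwkK' | rev | cut -d: -f1 | rev | xargs
```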
ppetraki
  • 428
  • 4
  • 11
  • Although there is stated "source url" and there is some parsing I didn't try to understand it worked by simply directly using what is called fileid here and in other answers as first parameter. – jan Nov 13 '17 at 09:12
  • @jan That may mean there is more than one url style. I'm glad it still worked for you over all. – ppetraki Nov 14 '17 at 17:30
4

I found a working solution to this... Simply use the following

wget --load-cookies /tmp/cookies.txt "https://docs.google.com/uc?export=download&confirm=$(wget --quiet --save-cookies /tmp/cookies.txt --keep-session-cookies --no-check-certificate 'https://docs.google.com/uc?export=download&id=1HlzTR1-YVoBPlXo0gMFJ_xY4ogMnfzDi' -O- | sed -rn 's/.*confirm=([0-9A-Za-z_]+).*/\1\n/p')&id=1HlzTR1-YVoBPlXo0gMFJ_xY4ogMnfzDi" -O besteyewear.zip && rm -rf /tmp/cookies.txt
  • when doing this I get WARNING: cannot verify docs.google.com's certificate, issued by `/C=US/O=Google Trust Services/CN=Google Internet Authority G3': Unable to locally verify the issuer's authority. HTTP request sent, awaiting response... 404 Not Found 2019-02-08 02:56:30 ERROR 404: Not Found. any workarounds? – B''H Bi'ezras -- Boruch Hashem Feb 08 '19 at 07:57
  • WOW! Great answer and very logical. Thanks for writing it up. Downloaded 1.3 GB file using this command... Fully auto mode from linux terminal by this command only. Also tried on GCP. Works great there as well. Year 2020... I believe this is the right way... even if they change a bit of commands this should stand test of time. – Atta Jutt Mar 30 '20 at 21:59
4

After messing around with this for a while, I found a way to download my file using Chrome's developer tools.

  1. At your Google Docs tab, press Ctrl+Shift+J (or Settings --> Developer tools)
  2. Switch to Network tabs
  3. At your docs file, click "Download" --> Download as CSV, xlsx,....
  4. It will show you the request in the "Network" console (screenshot: the export request in the Network tab)

  5. Right click -> Copy -> Copy as Curl

  6. Your curl command will look like the following; add -o to create an exported file: curl 'https://docs.google.com/spreadsheets/d/1Cjsryejgn29BDiInOrGZWvg/export?format=xlsx&id=1Cjsryejgn29BDiInOrGZWvg' -H 'authority: docs.google.com' -H 'upgrade-insecure-requests: 1' -H 'user-agent: Mozilla/5.0 (X..... -o server.xlsx

Solved!

Ender
  • 835
  • 1
  • 12
  • 23
  • that link expires and is only for 1 ip address at a time – B''H Bi'ezras -- Boruch Hashem May 19 '20 at 23:59
  • You can just make a silent constant request to keep the session alive. @bluejayke – Ender May 20 '20 at 02:51
  • I did exactly that and when came here to write another answer, stumbled upon yours. I confirm that it works with different IPs as I needed to download a 36gb file to the server that doesn't have a browser. And I extracted the link from my laptop. – dc914337 Jun 12 '20 at 13:32
4

This is the way in 2023:

FILEID="unique_google_drive_id"
FILENAME="output_filename"

wget --load-cookies /tmp/cookies.txt "https://docs.google.com/uc?export=download&confirm=$(wget --quiet --save-cookies /tmp/cookies.txt --keep-session-cookies --no-check-certificate "https://docs.google.com/uc?export=download&id=${FILEID}" -O- | sed -rn 's/.*confirm=([0-9A-Za-z_]+).*/\1\n/p')&id=${FILEID}" -O "${FILENAME}" && rm -rf /tmp/cookies.txt
O.rka
  • 29,847
  • 68
  • 194
  • 309
3

Here's a little bash script I wrote that does the job today. It works on large files and can resume partially fetched files too. It takes two arguments: the first is the file_id and the second is the name of the output file. The main improvements over previous answers here are that it works on large files and only needs commonly available tools: bash, curl, tr, grep, du, cut, head and mv.

#!/usr/bin/env bash
fileid="$1"
destination="$2"

# try to download the file
curl -c /tmp/cookie -L -o /tmp/probe.bin "https://drive.google.com/uc?export=download&id=${fileid}"
probeSize=`du -b /tmp/probe.bin | cut -f1`

# did we get a virus message?
# this will be the first line we get when trying to retrieve a large file
bigFileSig='<!DOCTYPE html><html><head><title>Google Drive - Virus scan warning</title><meta http-equiv="content-type" content="text/html; charset=utf-8"/>'
sigSize=${#bigFileSig}

if (( probeSize <= sigSize )); then
  virusMessage=false
else
  firstBytes=$(head -c $sigSize /tmp/probe.bin)
  if [ "$firstBytes" = "$bigFileSig" ]; then
    virusMessage=true
  else
    virusMessage=false
  fi
fi

if [ "$virusMessage" = true ] ; then
  confirm=$(tr ';' '\n' </tmp/probe.bin | grep confirm)
  confirm=${confirm:8:4}
  curl -C - -b /tmp/cookie -L -o "$destination" "https://drive.google.com/uc?export=download&id=${fileid}&confirm=${confirm}"
else
  mv /tmp/probe.bin "$destination"
fi
  • Welcome to SO. If you have used any reference for this purpose please include them in your answer. Anyhow, nice job +1 – M-- Apr 18 '17 at 17:47
3

There's an easier way.

Install the cliget (Firefox) or CurlWget (Chrome) browser extension.

Download the file from the browser. The extension creates a curl/wget command that remembers the cookies and headers used while downloading the file. Use that command from any shell to download it again.

Yesh
  • 976
  • 12
  • 15
3

Alternative Method, 2020

Works well for headless servers. I was trying to download a ~200GB private file but couldn't get any of the other methods mentioned in this thread to work.

Solution

  1. (Skip this step if the file is already in your own google drive) Make a copy of the file you want to download from a Public/Shared Folder into your Google Drive account. Select File -> Right Click -> Make a copy

Demo Make a copy

  2. Install and set up Rclone, an open-source command-line tool, to sync files between your local storage and Google Drive. Here's a quick tutorial to install and set up rclone for Google Drive.

  3. Copy your file from Google Drive to your machine using Rclone:

rclone copy mygoogledrive:path/to/file /path/to/file/on/local/machine -P

The -P argument tracks the progress of the download and lets you know when it's finished.

S V Praveen
  • 421
  • 3
  • 8
2

Here is a workaround which I came up with to download files from Google Drive to my Google Cloud Linux shell.

  1. Share the file to PUBLIC and with Edit permissions using advanced sharing.
  2. You will get a sharing link which would have an ID. See the link:- drive.google.com/file/d/[ID]/view?usp=sharing
  3. Copy that ID and Paste it in the following link:-

googledrive.com/host/[ID]

  4. The above link is our download link.
  5. Use wget to download the file:

wget https://googledrive.com/host/[ID]

  6. This command will download the file, named [ID] with no extension but with the same size, to the location where you ran the wget command.
  7. I actually downloaded a zipped folder, so I renamed that awkward file using:

mv [ID] 1.zip

  8. Then, using

unzip 1.zip

we will get the files.

Vikas Gautam
  • 1,793
  • 22
  • 21
2

For anyone who stumbles on this thread the following works as of May 2022 to get around the antivirus check on large files:

#!/bin/bash
fileid="FILEIDENTIFIER"
filename="FILENAME"
html=`curl -c ./cookie -s -L "https://drive.google.com/uc?export=download&id=${fileid}"`
curl -Lb ./cookie "https://drive.google.com/uc?export=download&`echo ${html}|grep -Po '(confirm=[a-zA-Z0-9\-_]+)'`&id=${fileid}" -o ${filename}
castaway2000
  • 306
  • 5
  • 21
1

May 2018 WORKING

Based on the comments here, I created a bash script that converts a list of URLs from a file URLS.txt into direct links in URLS_DECODED.txt, for use in a download accelerator such as FlashGet (I use Cygwin to combine Windows and Linux tools).

wget's --spider option is used to avoid downloading the file and to get the final link directly.

The grep, head and cut commands process the output and extract the final link. They match against Spanish-language wget output: replace *******Localización*********** with the English word Location and adapt the head and cut numbers accordingly.

In echo -e "$URL_TO_DOWNLOAD\r", the \r is probably Cygwin-only and should be replaced by \n (line break) elsewhere.

**********user*********** is the user folder.

rm -rf /home/**********user***********/URLS_DECODED.txt
COUNTER=0
while read p; do 
    string=$p
    hash="${string#*id=}"
    hash="${hash%&*}"
    hash="${hash#*file/d/}"
    hash="${hash%/*}"
    let COUNTER=COUNTER+1
    echo "Enlace "$COUNTER" id="$hash
    URL_TO_DOWNLOAD=$(wget --spider --load-cookies /tmp/cookies.txt "https://docs.google.com/uc?export=download&confirm=$(wget --quiet --save-cookies /tmp/cookies.txt --keep-session-cookies --no-check-certificate 'https://docs.google.com/uc?export=download&id='$hash -O- | sed -rn 's/.*confirm=([0-9A-Za-z_]+).*/\1\n/p')&id="$hash 2>&1 | grep *******Localización***********: | head -c-13 | cut -c16-)
    rm -rf /tmp/cookies.txt
    echo -e "$URL_TO_DOWNLOAD\r" >> /home/**********user***********/URLS_DECODED.txt
    echo "Enlace "$COUNTER" URL="$URL_TO_DOWNLOAD
done < /home/**********user***********/URLS.txt
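The chain of ${...#} / ${...%} parameter expansions at the top of the loop extracts the file ID from either URL style Google Drive uses; a quick illustration (example IDs made up):

```shell
# Style 1: ...?id=<ID>   Style 2: .../file/d/<ID>/view
for string in 'https://docs.google.com/uc?export=download&id=ABC123' \
              'https://drive.google.com/file/d/XYZ789/view?usp=sharing'; do
    hash="${string#*id=}"      # drop everything up to 'id=' (style 1)
    hash="${hash%&*}"          # drop trailing '&...' parameters
    hash="${hash#*file/d/}"    # drop everything up to 'file/d/' (style 2)
    hash="${hash%/*}"          # drop trailing '/view...'
    echo "$hash"
done
```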
Sk.
  • 460
  • 7
  • 15
1

JULY 2020 - Windows users batch file solution

I would like to add a simple batch file solution for windows users, as I found only linux solutions and it took me several days to learn all this stuff for creating a solution for windows. So to save this work from others that may need it, here it is.

Tools you need

  1. wget for windows (small 5KB exe program, no need installation) Download it from here. https://eternallybored.org/misc/wget/

  2. jrepl for windows (small 117KB batch file program, no need installation) This tool is similar to linux sed tool. Download it from here: https://www.dostips.com/forum/viewtopic.php?t=6044

Assuming

%filename% - the file name you want the download to be saved to.
%fileid% - the Google file ID (as already explained here before)

Batch code for downloading small file from google drive

wget -O "%filename%" "https://docs.google.com/uc?export=download&id=%fileid%"        

Batch code for downloading large file from google drive

set cookieFile="cookie.txt"
set confirmFile="confirm.txt"
   
REM download the cookie and the message asking for confirmation
wget --quiet --save-cookies "%cookieFile%" --keep-session-cookies --no-check-certificate "https://docs.google.com/uc?export=download&id=%fileid%" -O "%confirmFile%"
   
REM extract confirmation key from message saved in confirm file and keep in variable resVar
jrepl ".*confirm=([0-9A-Za-z_]+).*" "$1" /F "%confirmFile%" /A /rtn resVar
   
REM when jrepl writes to variable, it adds carriage return (CR) (0x0D) and a line feed (LF) (0x0A), so remove these two last characters
set confirmKey=%resVar:~0,-2%
   
REM download the file using cookie and confirmation key
wget --load-cookies "%cookieFile%" -O "%filename%" "https://docs.google.com/uc?export=download&id=%fileid%&confirm=%confirmKey%"
   
REM clear temporary files 
del %cookieFile%
del %confirmFile%
audi02
  • 559
  • 1
  • 4
  • 16
1

Nov 2020

If you prefer using bash script, this worked for me: (5Gb file, publicly available)

#!/bin/bash
if [ $# != 2 ]; then
    echo "Usage: googledown.sh ID save_name"
    exit 1
fi
confirm=$(wget --quiet --save-cookies /tmp/cookies.txt --keep-session-cookies --no-check-certificate 'https://docs.google.com/uc?export=download&id='$1 -O- | sed -rn 's/.*confirm=([0-9A-Za-z_]+).*/\1\n/p')
echo $confirm
wget --load-cookies /tmp/cookies.txt "https://docs.google.com/uc?export=download&confirm=$confirm&id=$1" -O $2 && rm -rf /tmp/cookies.txt
justadev
  • 1,168
  • 1
  • 17
  • 32
1

Get the file ID:

  1. Go to your Google Drive in your browser.

  2. Right-click the file you want to download and click Get shareable link. The link looks like this: https://drive.google.com/file/d/XXX/view?usp=sharing. Make note of the file ID XXX; you will be needing it below.

Get an OAuth token:

  1. Go to OAuth 2.0 Playground

  2. In the Select & authorize APIs box, scroll down, expand Drive API v3, and select https://www.googleapis.com/auth/drive.readonly.

  3. Click Authorize APIs and then Exchange authorization code for tokens. Copy the Access token YYY; you will be needing it below.

Download the file from the command line:

If using OS X or Linux, open the "Terminal" program and enter the following command.

curl -H "Authorization: Bearer YYY" https://www.googleapis.com/drive/v3/files/XXX?alt=media -o ZZZ 

If using Windows, open the “PowerShell” program and enter the following command.

Invoke-RestMethod -Uri https://www.googleapis.com/drive/v3/files/XXX?alt=media -Method Get -Headers @{"Authorization"="Bearer YYY"} -OutFile ZZZ

In your command, replace XXX with the file ID from above, YYY with the access token from above, and ZZZ with the file name that will be saved (for example, "myFile.zip" if you're downloading a zip file).

thunder
  • 93
  • 6
1

You can install "lynx" and use it to download the file easily.

yum install lynx

replace ID_OF_FILE with your file's id

lynx "https://drive.google.com/u/0/uc?id=ID_OF_FILE&export=download"

Then select "Download" or "Download anyway".

That's it.

0

skicka is a CLI tool to upload, download, and access files on Google Drive.

For example:

skicka download /Pictures/2014 ~/Pictures.copy/2014
10 / 10 [=====================================================] 100.00 % 
skicka: preparation time 1s, sync time 6s
skicka: updated 0 Drive files, 10 local files
skicka: 0 B read from disk, 16.18 MiB written to disk
skicka: 0 B uploaded (0 B/s), 16.18 MiB downloaded (2.33 MiB/s)
skicka: 50.23 MiB peak memory used
clemens
  • 16,716
  • 11
  • 50
  • 65
0

May 2018

If you want to use curl to download a file from Google Drive, in addition to the file id in drive you also need an OAuth2 access_token for Google Drive API. Getting the token involves several steps with the Google API framework. The sign up steps with Google are (currently) free.

An OAuth2 access_token potentially allows all kinds of activity, so be careful with it. Also, the token times out after a short while (1 hour?) but not short enough to prevent abuse if someone captures it.

Once you have an access_token and the fileid, this will work:

AUTH="Authorization: Bearer the_access_token_goes_here"
FILEID="fileid_goes_here"
URL=https://www.googleapis.com/drive/v3/files/$FILEID?alt=media
curl --header "$AUTH" $URL >myfile.ext

See also: Google Drive APIs -- REST -- Download Files

Paul
  • 26,170
  • 12
  • 85
  • 119
  • Is this true if the file or folder is shared with "anyone who has the link" ? – Tony Adams Aug 12 '18 at 14:47
  • 1
    @TonyAdams The provided link itself goes to a human-friendly preview page, it can not be provided to curl as-is to download the content. – Paul Aug 12 '18 at 21:26
0

You just need to use wget with the URL quoted (the shell would otherwise interpret the &):

 wget "https://drive.google.com/uc?authuser=0&id=[your ID without brackets]&export=download"

PD. The file must be public.

José Vallejo
  • 356
  • 4
  • 17
0

Get the shareable link and open it in incognito (very important). It will say that it cannot scan.

Open inspector and track network traffic. Click the button "Download anyway".

Copy the url of the last request made. This is your link. Use it in wget.

ktsour
  • 33
  • 5
0

I did this using a Python script and the Google Drive API. You can try out this snippet:

# using chunked download
import io
from googleapiclient.http import MediaIoBaseDownload

file_id = 'someid'
request = drive_service.files().get_media(fileId=file_id)
fh = io.BytesIO()
downloader = MediaIoBaseDownload(fh, request)
done = False
while done is False:
    status, done = downloader.next_chunk()
    print("Download %d%%." % int(status.progress() * 100))
41 72 6c
  • 1,600
  • 5
  • 19
  • 30
aryan singh
  • 151
  • 11
0

Solution using only Google Drive API

Before running the code below, you must activate the Google Drive API, install the dependencies, and authenticate with your account. Instructions can be found on the original Google Drive API guide page.

import io
import os
import pickle
import sys, argparse
from googleapiclient.discovery import build
from google.auth.transport.requests import Request
from googleapiclient.http import MediaIoBaseDownload
from google_auth_oauthlib.flow import InstalledAppFlow

# If modifying these scopes, delete the file token.pickle.
SCOPES = ['https://www.googleapis.com/auth/drive.readonly']


def _main(file_id, output):
    """ Downloads the Drive file with the given ID to the given
        output path, using the Drive v3 API.
    """
    if not file_id:
        sys.exit('\nMissing arguments. Correct usage:\ndrive_api_download.py --file_id <file_id> [--output output_name]\n')
    elif not output:
        output = "./" + file_id
    
    creds = None
    # The file token.pickle stores the user's access and refresh tokens, and is
    # created automatically when the authorization flow completes for the first
    # time.
    if os.path.exists('token.pickle'):
        with open('token.pickle', 'rb') as token:
            creds = pickle.load(token)
    # If there are no (valid) credentials available, let the user log in.
    if not creds or not creds.valid:
        if creds and creds.expired and creds.refresh_token:
            creds.refresh(Request())
        else:
            flow = InstalledAppFlow.from_client_secrets_file(
                'credentials.json', SCOPES)
            creds = flow.run_local_server(port=0)
        # Save the credentials for the next run
        with open('token.pickle', 'wb') as token:
            pickle.dump(creds, token)

    service = build('drive', 'v3', credentials=creds)

    # Downloads file
    request = service.files().get_media(fileId=file_id)
    fp = open(output, "wb")
    downloader = MediaIoBaseDownload(fp, request)
    done = False
    while done is False:
        status, done = downloader.next_chunk(num_retries=3)
        print("Download %d%%." % int(status.progress() * 100))

if __name__ == '__main__':
    parser = argparse.ArgumentParser()
    parser.add_argument('-i', '--file_id')
    parser.add_argument('-o', '--output')
    args = parser.parse_args()
    
    _main(args.file_id, args.output)
ottovon
  • 333
  • 2
  • 10
0

I use this little script that gets only the URL copied from Google Drive:

#!/bin/bash

name=`curl -s "$1" | grep -w \"name\" | sed 's/.*"name" content="//' | sed 's/".*//'`
id=`echo "$1" | sed 's#.*/d/##; s#/view.*##'`
curl -L "https://drive.google.com/uc?id=$id" > "$name"
# or
# wget -O $name https://drive.google.com/uc?id=$id
Rafi Moor
  • 66
  • 4
-1

no-scripting method to get a direct link

I know that people without bash-scripting experience come to this post from other sites. This is a solution that works entirely within your browser.

Step 1: Generate a direct link normally with existing tools

Firstly, you use all other existing solutions to generate a direct link from your share link. You may use https://sites.google.com/site/gdocs2direct/, https://www.wonderplugin.com/online-tools/google-drive-direct-link-generator/ or https://chrome.google.com/webstore/detail/drive-direct-download/mpfdlhhpbhgghplbambikplcfpbjiail.
I won't cover that part here.

The generated direct link looks like this: https://drive.google.com/u/0/uc?id=1Gjvcfj-8xxxxxxx8G8_jpgjcyorQ7BX5&export=download

The direct link works for most small files, but it does not work for large files: it shows a virus warning instead of simply downloading the file. Now let's solve this issue.

Step 2: Fix the broken direct link to workaround the virus warning

Open the broken "direct" link in your browser and you will see "Google Drive can't scan this file for viruses". Now right-click, view the page source, and you will find the following text:

<form id="downloadForm" action="https://drive.google.com/u/0/uc?id=1Gjvcfj-8xxxxxxx8G8_jpgjcyorQ7BX5&amp;export=download&amp;confirm=t&amp;uuid=5a0dd46b-521e-4ae7-8b41-0912e88b7782" method="post">

You have found the final link! Replace every &amp; with & and enjoy:

https://drive.google.com/uc?id=1Gjvcfj-8xxxxxxx8G8_jpgjcyorQ7BX5&export=download&confirm=t&uuid=c953a94e-b844-479f-8386-1ec83770fffb
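The manual view-source step can also be scripted: pull the downloadForm action out of the warning page and decode the &amp; entities. A sketch, run here against a made-up sample of the page source shown above (in real use, the HTML would come from curl on the uc?...&export=download URL):

```shell
# Sample of the warning-page source (IDs and uuid are placeholders)
html='<form id="downloadForm" action="https://drive.google.com/u/0/uc?id=FILEID&amp;export=download&amp;confirm=t&amp;uuid=1234" method="post">'
# Extract the action attribute, strip the wrapper, decode &amp; to &
url=$(printf '%s' "$html" | grep -o 'action="[^"]*"' | sed 's/^action="//; s/"$//; s/&amp;/\&/g')
echo "$url"
```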

Other solution for large file: Google Drive API

There is already a great answer for this solution!

recolic
  • 554
  • 4
  • 18
-1

You could get the download URL from Google as .../file/d/FILEID/view?usp=share_link and extract the FILEID part. Then substitute it into the following command (it appears there twice):

wget --load-cookies /tmp/cookies.txt \
     "https://docs.google.com/uc?export=download&confirm=$(wget --quiet --save-cookies /tmp/cookies.txt --keep-session-cookies --no-check-certificate 'https://docs.google.com/uc?export=download&id=FILEID' -O- | sed -rn 's/.*confirm=([0-9A-Za-z_]+).*/\1\n/p')&id=FILEID" -O FILENAME && \
    rm -rf /tmp/cookies.txt

Replace FILENAME with whatever the file is supposed to be called in the above line and enjoy.

Manoel Vilela
  • 844
  • 9
  • 17
-4

The easiest way is to put whatever you want to download in a folder. Share that folder, then grab the folder ID from the URL bar.

Then go to https://googledrive.com/host/[ID] (replace ID with your folder ID). You should see a list of all the files in that folder; click the one you want to download. The download should then appear on your downloads page (Ctrl+J in Chrome). Copy the download link and use wget "download link"

Enjoy :)