
While working from home, my bandwidth is too low to clone a repo. I've tried around 10 times to clone it, with no luck.

Receiving objects:  91% (54330/59387), 445.67 MiB | 44.00 KiB/s
Receiving objects:  91% (54506/59387), 445.80 MiB | 46.00 KiB/s
Receiving objects:  91% (54635/59387), 445.86 MiB | 45.00 KiB/s
Receiving objects:  92% (54637/59387), 445.86 MiB | 45.00 KiB/s
Receiving objects:  92% (54721/59387), 445.92 MiB | 38.00 KiB/s
Receiving objects:  92% (54782/59387), 445.99 MiB | 43.00 KiB/s
fatal: the remote end hung up unexpectedly
fatal: early EOF
fatal: index-pack failed

Is there any way I could receive the remaining ~8% of the objects? Let me know if there are any git clone options that could help me with this. I am okay even if there is a way to achieve the same in Visual Studio or Visual Studio Code.

I have seen "How to complete a git clone for a big project on an unstable connection?". It explains what to do if you are starting the clone from scratch again. What I am looking for is how to move ahead without restarting the clone.

Shridhar R Kulkarni
  • About your last sentence: if the clone fails, `git` (unfortunately, in your case) cleans up whatever it had locally, so there is no "resuming" (at least not with `git` alone). Are you in a situation where you have part of the repo's history locally? – LeGEC Oct 28 '20 at 13:12
  • @LeGEC: Agreed. No, I don't have a partial history locally. – Shridhar R Kulkarni Oct 28 '20 at 14:45
  • The suggestion is: try to download a smaller starting payload (using `git clone --depth 1`), and increase it little by little until a complete fetch passes. – LeGEC Oct 28 '20 at 16:56

1 Answer


Try to start with a shallow clone:

git clone --depth 1 <repo>

You may then try to unshallow the repo:

git fetch --unshallow origin

If that fails, you can first try to increase the depth iteratively:

git fetch --depth 10 origin master
git fetch --depth 20 origin master
...

and possibly fetch branches one by one:

git fetch --depth 1 origin branch1
git fetch --depth 10 origin branch1
...

until `git fetch --unshallow` passes.
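The iterative deepening above can be scripted as a loop; a minimal sketch, assuming the default branch is `master` (substitute your branch name) and a step of 10 commits:

```shell
# Keep deepening the shallow clone until a full --unshallow fetch succeeds.
depth=10
until git fetch --unshallow origin; do
    # Each small fetch transfers only a slice of history, so a dropped
    # connection costs at most one small fetch rather than the whole clone.
    git fetch --depth "$depth" origin master
    depth=$((depth + 10))
done
```

The step size is a trade-off: smaller steps lose less on a dropped connection, larger steps finish in fewer round trips.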


Another way is to just copy the files from any existing clone.

Say you have SSH access to a server that doesn't have bandwidth problems:

ssh <server-with-bandwidth>
[remote]$ git clone <repo>

You then just need to copy the content of remote:repo/ (including its .git/ directory, which contains all the cloned history) over to your own PC.

You can use scp, sftp, rsync, or any tool which allows resuming on failure.

Obviously, follow your company's security guidelines: choose a remote server on which it is acceptable to clone the code, remove the intermediate clone once it has been used, etc.

LeGEC