
I'm getting an error like:

Cloning into 'large-repository'...
remote: Counting objects: 20248, done.
remote: Compressing objects: 100% (10204/10204), done.
error: RPC failed; curl 18 transfer closed with outstanding read data remaining 
fatal: The remote end hung up unexpectedly
fatal: early EOF
fatal: index-pack failed
Ram
  • How are you trying to clone it? – Nikola Andreev Jan 11 '18 at 05:42
  • I'm trying with the git clone command: $ git clone https://ramweexcel@bitbucket.org/weexcel1/higher-education-haryana.git – Ram Jan 11 '18 at 05:46
  • 3
    Possible duplicate of [error: RPC failed; curl transfer closed with outstanding read data remaining](https://stackoverflow.com/questions/38618885/error-rpc-failed-curl-transfer-closed-with-outstanding-read-data-remaining) – omurbek Jan 11 '18 at 05:49
  • Possible duplicate of [How do I download a large Git Repository?](https://stackoverflow.com/questions/34389446/how-do-i-download-a-large-git-repository) – Nikola Andreev Jan 11 '18 at 05:52
  • Thanks Nikola. I did the same steps. Clone in progress... – Ram Jan 11 '18 at 05:54
  • You had selected the right answer before! – VonC Jan 16 '18 at 05:22

5 Answers

git config --global http.postBuffer 524288000

git clone repo_url --depth 1

I followed the above steps and finally cloned my code successfully.
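As a quick sanity check (not part of the original answer), the configured value can be read back after setting it; the value is in bytes, so 524288000 is roughly 500 MB:

```shell
# Raise the HTTP post buffer globally, then confirm the value stuck.
git config --global http.postBuffer 524288000
git config --global --get http.postBuffer   # prints 524288000
```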

Melebius
Ram
  • I have edited my answer with that missing element for you to select it back. – VonC Jan 16 '18 at 07:15
  • 1
    Turning compression off and increasing the `http.postBuffer` size has worked for me, even though I clone the repo through SSH. Are you guys aware of that? Does Git use the `http.postBuffer` value even when cloning with SSH? A bit strange but who knows, for me it worked :) – tonix Oct 21 '20 at 08:01

That looks like a curl error, typical of a slow internet connection which closes too soon.

As seen here, try a shallow clone (or switch to ssh)

git clone https://ramweexcel@bitbucket.org/weexcel1/higher-education-haryana.git --depth 1

Even then, as I documented in 2011, you might need to raise the http.postBuffer

git config --global http.postBuffer 524288000

But the idea remains: starting with one commit depth can help.

From there, you can gradually increase the depth:

git fetch --depth=<number-of-commits>

And, after a few iterations:

git fetch --unshallow
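The gradual-deepening idea can be sketched as a small shell loop; the step size of 100 is an illustrative choice, not something prescribed here, and `git rev-parse --is-shallow-repository` requires git 2.15 or later:

```shell
#!/bin/sh
# Run inside the shallow clone. Deepen the history in fixed steps so an
# interrupted transfer only costs one small fetch, not the whole clone.
set -e
depth=1
step=100   # illustrative increment; tune to your connection
while [ "$(git rev-parse --is-shallow-repository)" = "true" ]; do
    depth=$((depth + step))
    git fetch --depth="$depth"
done
# When the loop exits, the shallow marker is gone and the history is complete.
```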
VonC
  • Yes. I have seen it and did the same steps. Let's see what happens. Clone in progress... – Ram Jan 11 '18 at 05:50
  • @Ram The idea is to *not* download the full history, but only the last commit. And try to extend from there. – VonC Jan 11 '18 at 05:51

First, try to download a smaller amount, so that when the network fails, you don't have to start from zero:
Taken from this answer by ingyhere:

First, turn off compression:

git config --global core.compression 0

Next, let's do a partial clone to truncate the amount of info coming down:

git clone --depth 1 <repo_URI>

When that works, go into the new directory and retrieve the rest of the clone:

git fetch --unshallow 

or, alternately,

git fetch --depth=2147483647

Now, do a regular pull:

git pull --all

I think there is a glitch with msysgit in the 1.8.x versions that exacerbates these symptoms, so another option is to try with an earlier version of git (<= 1.8.3, I think).

If this does not help because your network is still too unstable or your repo is still too large, try a different network; a wired one would be best.

For me, that was not an option. VonC's answer states to do git config --global http.postBuffer 524288000. Note that http.postBuffer applies to HTTPS remotes as well; there is no separate https.postBuffer setting.

Finally, what worked for me in the end:
Give up and use a different machine
If it works on your laptop, just pull that repo onto your laptop, then run

git bundle create /my/thumb/drive/myrepo.bundle --all  

And restore it on your other machine with

git clone /my/thumb/drive/myrepo.bundle
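One optional addition (not in the original answer): `git bundle verify` checks that a bundle file is valid and self-contained before you rely on it on the other machine. The paths below are placeholders:

```shell
# Package every ref into one file, check it, then clone from it offline.
git bundle create /tmp/myrepo.bundle --all
git bundle verify /tmp/myrepo.bundle      # reports whether the bundle is usable
git clone /tmp/myrepo.bundle myrepo-copy
```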
lucidbrot

Got the same error message.

The real cause was a full disk; the repo being cloned is around 20 GB.

Cloning with depth=1 worked, and I then fetched more depth in small increments. After a while I noticed the disk was almost full.

I freed up space on the disk, deleted the partial repo, and the "git clone" with full depth worked. Unfortunately the error message (The remote end hung up unexpectedly) was misleading.

Yves Forget

In most cases, this occurs because of a slow internet connection. Try to clone in chunks.

What should you do then?
Step-1: git clone --depth=1 <repo_url>

Step-2: git fetch --depth=<x> (here x is an integer number of commits; if the internet is slow, increase it in increments of no more than about 200)

Run the above command multiple times, increasing the depth each time.

Finally: git fetch --unshallow

unshallow: Git provides a fetch --unshallow command which solves the problem, so we just need to run git fetch --unshallow in the repository before running r10k. However, some of our (older) GitLab installs don't make shallow clones. Instead, they make full clones with a single detached branch, so we need to fetch --all instead.
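The choice between those two final fetches can be automated: `git rev-parse --is-shallow-repository` (git 2.15+) prints true inside a shallow clone. A minimal sketch:

```shell
# Run inside the clone: pick the right final fetch for its clone type.
if [ "$(git rev-parse --is-shallow-repository)" = "true" ]; then
    git fetch --unshallow   # shallow clone: retrieve the full history
else
    git fetch --all         # full clone with detached branch: fetch all refs
fi
```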

Ramkumar D