220

I'm facing this error when I try to clone a repository from GitLab (GitLab 6.6.2 4ef8369):

remote: Counting objects: 66352, done.
remote: Compressing objects: 100% (10417/10417), done.
error: RPC failed; curl 18 transfer closed with outstanding read data remaining
fatal: The remote end hung up unexpectedly
fatal: early EOF
fatal: index-pack failed

The clone is then aborted. How can I avoid this?

Laurenz Albe
Vy Do

27 Answers

338

This happens to me more often than not; I am on a slow internet connection and have to clone a decently large git repository. The most common issue is that the connection closes and the whole clone is cancelled.

Cloning into 'large-repository'...
remote: Counting objects: 20248, done.
remote: Compressing objects: 100% (10204/10204), done.
error: RPC failed; curl 18 transfer closed with outstanding read data remaining 
fatal: The remote end hung up unexpectedly
fatal: early EOF
fatal: index-pack failed

After a lot of trial and error, and many “remote end hung up unexpectedly” failures, I have a way that works for me. The idea is to do a shallow clone first and then update the repository with its history:

$ git clone http://github.com/large-repository --depth 1
$ cd large-repository
$ git fetch --unshallow
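The two-step approach can be sketched end to end. This is a minimal, self-contained demo against a throwaway local repository (the two-commit history, paths, and identity are made up for illustration); the payoff comes when the URL points at a real, large remote:

```shell
#!/bin/sh
set -e

# Build a tiny stand-in for the real remote: a repo with two commits.
tmp=$(mktemp -d)
git init -q "$tmp/upstream"
git -C "$tmp/upstream" -c user.name=demo -c user.email=demo@example.com \
    commit -q --allow-empty -m "first"
git -C "$tmp/upstream" -c user.name=demo -c user.email=demo@example.com \
    commit -q --allow-empty -m "second"

# Step 1: shallow clone -- only the newest commit is transferred.
# (file:// is needed for --depth to apply to a local path.)
git clone -q --depth 1 "file://$tmp/upstream" "$tmp/large-repository"
cd "$tmp/large-repository"
git rev-list --count HEAD   # 1 commit so far

# Step 2: backfill the full history in a second, smaller transfer.
git fetch -q --unshallow
git rev-list --count HEAD   # now 2 commits
```

Because the history arrives in two smaller transfers instead of one huge pack, a flaky connection has a much better chance of completing each step.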
karel
Khader M A
  • This is the only answer that describes a workaround for the problem without switching to SSH. This worked for me, thanks! – garie Jul 10 '17 at 14:36
  • thx dude, this answer worked for me too. Just change my network with fast internet connection – wanz Jul 12 '17 at 05:15
  • The key here is `--depth 1` and `--unshallow`. This also works for fetching an existing repo on slow connection: `git fetch --depth 1` then `git fetch --unshallow`. – Andrew T. Feb 13 '18 at 09:10
  • For clarity @AndrewT., the `git fetch --unshallow` command deals with loss of connection in a more forgiving way than the `git clone`? And that's what makes the difference here? – Lowell Feb 09 '19 at 22:06
  • Now, the `git fetch --unshallow` command gives an `RPC failed;` error – ms_27 Jul 05 '19 at 12:25
  • why turns out failure since 100% cloned, anyone knows the reason? thanks – http8086 Oct 23 '19 at 15:41
  • Didn't work for me. Failed on the `git fetch --unshallow`. Guess my repo is too big even for this approach. Only SSH worked. – Jonathan Cabrera Feb 14 '20 at 22:48
  • If `git fetch --unshallow` still reports errors, you can use `git fetch --depth=100` and then `git fetch --depth=200` and then `git fetch --depth=300` and so on to fetch the repo incrementally. This way works for the Linux kernel repo, which is extremely large. – haolee Jun 02 '20 at 05:19
  • does `git fetch --unshallow` fetch all branches [as part of history info]? I am still not seeing a few branches. Am I missing something? – Sitesh Jun 15 '20 at 16:19
  • as others mentioned, still can't get past `git clone` and my repo is just a jekyll site with some images. this answer is outdated, going with `ssh` and secure connections is the only real solution today. – cregox Jun 18 '20 at 15:39
  • For me just using `git config --global http.postBuffer 524288000` as described at https://stackoverflow.com/a/38703069/2275206 did the trick – kghbln Mar 03 '22 at 23:06
  • I tried this, but then all the files from the repo (in my case, SolidWorks files) are only 1KB large and cannot be opened. Git still says that they are up-to-date. Anyone else have this issue? – Kjara May 02 '23 at 11:09
108

After a few days, I finally resolved this problem. Generate an SSH key by following this article:

https://help.github.com/articles/generating-a-new-ssh-key-and-adding-it-to-the-ssh-agent/

Then register it in two places:

  1. Your Git provider (GitLab in my case; the same applies to GitHub).
  2. Your local SSH identity.

Then clone with:

git clone username@mydomain.com:my_group/my_repository.git

and the error no longer occurs.

The original error

error: RPC failed; curl 18 transfer closed with outstanding read data remaining

happens when cloning over the HTTP protocol (which uses curl under the hood).

Additionally, you should increase the buffer size:

git config --global http.postBuffer 524288000
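For reference, setting and verifying the buffer looks like this. The sketch below uses a throwaway repo's local config so it doesn't touch your real ~/.gitconfig; the answer's `--global` form writes the same key globally:

```shell
#!/bin/sh
set -e

tmp=$(mktemp -d)
git init -q "$tmp/demo"
cd "$tmp/demo"

# 524288000 bytes (500 MB) is the buffer git uses when POSTing data
# to a smart-HTTP remote before switching to chunked transfer.
git config http.postBuffer 524288000

# Read the value back to confirm it took effect.
git config http.postBuffer   # prints 524288000
```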
Vy Do
  • 11
    Change from HTTP to SSH work for me. Config `http.postBuffer` didn't work. – thangdc94 Oct 31 '17 at 09:03
  • if error is still there , you should edit your ssh config file vi /users/username/.ssh/config and add serverAliveInterval 120 and exit the vi using wq (to save and exit). This will actually prevent the server from timeout and connection break errors. – Tanvir Singh Dec 05 '17 at 12:56
  • that's nice, but anyone knows why that happens for 100% cloned? – http8086 Oct 23 '19 at 14:58
  • 4
    Changing ```http.postBuffer``` worked for me - thanks! – Negar Zamiri Mar 25 '20 at 21:25
  • after all great steps listed here, `git clone ssh://git@github.com//.git` is the only clone command that worked with me. and there's no need to reference the ssh is in the command at all (in case you're also wondering like i was). – cregox Jun 18 '20 at 20:57
  • 1
    worked for me too for pulling a large solution via a slow vpn connection – Ilias.P Jul 20 '20 at 11:02
  • 2
    Beware: I experienced several issues with `npm publish` when raising the postBuffer. When I set it down to 50000000, issues were gone. The default value is 1000000, by the way. – Martin Braun Oct 09 '20 at 23:02
  • 1
    changing http.postBuffer 524288000 worked for me.Thank you – Anju mohan Dec 30 '22 at 08:29
46

First, you need to turn off compression:

git config --global core.compression 0

Then do a shallow clone:

git clone --depth=1 <url>

Then, the most important step, cd into your cloned project:

cd <shallow cloned project dir>

Now deepen the clone, step by step:

git fetch --depth=N, with increasing N

eg.

git fetch --depth=4

then,

git fetch --depth=100

then,

git fetch --depth=500

You can choose how many steps you want by picking the values of N,

and finally download all of the remaining history with:

git fetch --unshallow 
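Scripted against a small throwaway repository, the whole procedure looks like this (a minimal sketch: the five-commit history and the tiny depth values are placeholders — scale the depths up by orders of magnitude for a real repo):

```shell
#!/bin/sh
set -e

# Stand-in remote with five commits.
tmp=$(mktemp -d)
git init -q "$tmp/upstream"
for i in 1 2 3 4 5; do
  git -C "$tmp/upstream" -c user.name=demo -c user.email=demo@example.com \
      commit -q --allow-empty -m "commit $i"
done

# Shallow clone, then deepen step by step, then unshallow.
git clone -q --depth=1 "file://$tmp/upstream" "$tmp/project"
cd "$tmp/project"
for depth in 2 4; do          # increasing N, a few commits at a time
  git fetch -q --depth="$depth"
done
git fetch -q --unshallow      # fetch whatever history is still missing
git rev-list --count HEAD     # prints 5
```

Each `git fetch --depth=N` is a small, restartable transfer, so a dropped connection only costs you one step rather than the whole clone.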

upvote if it helps you :)

NikhilP
  • 2
    This is the only option that worked for me. On my case error was happening on: git clone --depth=1 However, as per your instruction, I've executed first: git config --global core.compression 0 Then all following steps, and everything worked great! PS: I have good internet connection, just today is behaving weirdly. Thank you! – Slipmp Jan 31 '21 at 21:23
  • 2
    Can you detail what does disabling compression help accomplish? – Slim Jul 02 '21 at 03:43
  • 4
    @Slim Here what we are doing is disabling the default behavior of compressing the full object and then fetch. instead we are fetching without compressing which allows us to fetch step by step by specifying the depth. – NikhilP Dec 27 '21 at 11:50
22

When I tried cloning from the remote, I got the same issue repeatedly:

remote: Counting objects: 182, done.
remote: Compressing objects: 100% (149/149), done.
error: RPC failed; curl 18 transfer closed with outstanding read data remaining
fatal: The remote end hung up unexpectedly
fatal: early EOF
fatal: index-pack failed

Finally this worked for me:

git clone https://username@bitbucket.org/repositoryName.git --depth 1
Vadim Kotov
  • 8
    what --depth 1 does – Wahdat Jan Mar 24 '20 at 13:25
  • If the source repository is complete, convert a shallow repository to a complete one, removing all the limitations imposed by shallow repositories. If the source repository is shallow, fetch as much as possible so that the current repository has the same history as the source repository. – Rahman Rezaee Jul 28 '20 at 12:07
  • BUT i don't want to `clone`, I want to `push` . How can i do it with depth – Lars Feb 28 '21 at 08:36
12

Simple solution: rather than cloning via HTTPS, clone via SSH.

For example:

git clone https://github.com/vaibhavjain2/xxx.git - Avoid
git clone git@github.com:vaibhavjain2/xxx.git - Correct
Vaibhav Jain
9

Network connection problems, possibly due to a persistent-connection timeout. The best fix is to switch to another network.

Mike Yang
6

As mentioned above, first re-run your git command from bash with the enhanced logging variables set at the front:

GIT_TRACE=1 GIT_CURL_VERBOSE=1 git ...

e.g.

GIT_CURL_VERBOSE=1 GIT_TRACE=1 git -c diff.mnemonicprefix=false -c core.quotepath=false fetch origin

This will show you detailed error information.

Sergey Gindin
6

This worked for me: using git:// instead of https://.

saeedgnu
Jinwawa
5

Usually this happens because of one of the reasons below:

  1. Slow internet.
     Switching to a LAN cable with a stable network connection helps in many cases. Avoid running other network-intensive tasks while you are fetching.
  2. A small TCP/IP connection timeout on the server side you are fetching from.
     There is not much you can do about this. All you can do is ask your sysadmin or the responsible CI/CD team to increase the TCP/IP timeout, and wait.
  3. Heavy load on the server.
     Due to heavy server load during working hours, downloading a large file can fail repeatedly. Start the download and leave your machine running overnight.
  4. A small HTTPS buffer on the client machine.
     Increasing the buffer sizes for posts and requests might help, but it is not guaranteed:

git config --global http.postBuffer 524288000

git config --global http.maxRequestBuffer 524288000

git config --global core.compression 0

Hitesh Sahu
4

For me, this problem occurred because of the proxy configuration. I added the IP of the git server to the proxy exceptions. The git server was local, but the no_proxy environment variable was not set correctly.

I used these commands to identify the problem:

#Linux:
export GIT_TRACE_PACKET=1
export GIT_TRACE=1
export GIT_CURL_VERBOSE=1

#Windows
set GIT_TRACE_PACKET=1
set GIT_TRACE=1
set GIT_CURL_VERBOSE=1

The output contained a "Proxy-Authorization" header, even though the git server was local and should not have gone through the proxy. The real problem turned out to be the file size limits defined by the proxy rules.
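A quick way to confirm the trace variables are taking effect is to run any git command with them set; with GIT_TRACE=1, git logs each internal step to stderr as "trace: ..." lines. A minimal sketch against a scratch repo (the interesting output appears once the traced command talks to your real remote):

```shell
#!/bin/sh
set -e

tmp=$(mktemp -d)
git init -q "$tmp/demo"
cd "$tmp/demo"

# Capture git's internal trace output on stderr into a log file.
GIT_TRACE=1 git status >/dev/null 2>trace.log

# Show the first few trace lines (e.g. "trace: built-in: git status").
grep "trace:" trace.log | head -n 3
```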

4

For me, the issue was that the connection closed before the whole clone completed. I used an ethernet connection instead of wifi, and that solved it.

Yuresh Karunanayake
4

This error seems to happen more commonly with a slow or flaky internet connection. Once I connected with a good internet speed, it worked perfectly.

Jitendra Rathor
3

What worked for me: this error can occur because of git's memory requirements. I added these lines to my global git configuration file .gitconfig, which is located in $USER_HOME, i.e. C:\Users\<USER_NAME>\.gitconfig:

[core] 
packedGitLimit = 512m 
packedGitWindowSize = 512m 
[pack] 
deltaCacheSize = 2047m 
packSizeLimit = 2047m 
windowMemory = 2047m
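The same settings can also be written with git config instead of editing the file by hand. This sketch targets a scratch config file so nothing global is modified; drop -f "$cfg" and add --global to write to your real ~/.gitconfig:

```shell
#!/bin/sh
set -e

cfg=$(mktemp)

# Write the [core] and [pack] limits from the answer into the file.
git config -f "$cfg" core.packedGitLimit 512m
git config -f "$cfg" core.packedGitWindowSize 512m
git config -f "$cfg" pack.deltaCacheSize 2047m
git config -f "$cfg" pack.packSizeLimit 2047m
git config -f "$cfg" pack.windowMemory 2047m

# Read one back to verify.
git config -f "$cfg" pack.windowMemory   # prints 2047m
```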
Vipul Patil
2

This problem arises when you have a proxy issue or a slow network. You can go with the depth solution, or

git fetch --all  or git clone 

If this gives a curl 56 Recv failure error, then download the repo as a zip, or specify the name of a single branch instead of --all:

git fetch origin BranchName 
Gajender Singh
1

Tried all of the answers on here. I was trying to add cocoapods onto my machine.

I didn't have an SSH key so thanks @Do Nhu Vy

https://stackoverflow.com/a/38703069/2481602

And finally used

git clone https://git.coding.net/CocoaPods/Specs.git ~/.cocoapods/repos/master

to fix the issue, as found in https://stackoverflow.com/a/50959034/2481602

MindBlower3
1

I faced this problem as well and resolved it. The problem was a slow internet connection; check your internet connection, nothing else. Once I connected with a good internet speed, it worked perfectly. I hope this helps you.

Praveen Kumar Verma
0

This problem usually occurs while cloning large repos. If git clone http://github.com/large-repository --depth 1 does not work in Windows cmd, try running the command in Windows PowerShell.

Vardaan
0

There can be two reasons:

  1. The internet is slow (this was the case for me).
  2. The buffer size is too small; in this case you can run the command git config --global http.postBuffer 524288000
Ady
0

This problem is 100% solvable. I was facing it because my project manager had changed the repo name, but I was still using the old one.

Engineer@-Engi64 /g/xampp/htdocs/hospitality
$ git clone https://git-codecommit.us-east-2.amazonaws.com/v1/repo/cms
Cloning into 'cms'...
remote: Counting objects: 10647, done.
error: RPC failed; curl 56 OpenSSL SSL_read: SSL_ERROR_SYSCALL, errno 10054
fatal: the remote end hung up unexpectedly
fatal: early EOF
fatal: index-pack failed

How I solved this problem: the repo link was not valid, which is why I was seeing this error. Please check your repo link before cloning.

Kaleemullah
0

I got the same issue while pushing some code to GitHub.

I tried git config --global http.postBuffer 524288000 but it didn't work for me.

Reason

It happens when your commit history and/or some file(s) are too large.

My Case

In my case, package-lock.json was causing the problem. It was 1500+KB in size and 33K lines of code.

How I solved it?

  1. Committed and pushed everything except package-lock.json.
  2. Copied the content of package-lock.json.
  3. Created a new file named package-lock.json from the GitHub repo page.
  4. Pasted the content of package-lock.json and committed it.
  5. Ran git pull locally.

And Done.

Tips

  • Maintain each commit size smaller
  • Push frequently
  • Use a good internet connection

I hope it helped you.

Shakil Alam
0
git config --global core.compression 0

then

git clone --depth=1 <https://your_repo.git>

then

git fetch --depth=2

then

git fetch --depth=10

... etc., until it reports

remote: Total 0 (delta 0), reused 0 (delta 0), pack-reused 0

at the end you can write

git fetch --unshallow

and you will get

fatal: --unshallow on a complete repository does not make sense

If at some stage you get an error again, try setting --depth to a smaller value and gradually increasing it from there.

Andre228
0

I was able to clone the repo with GitHub Desktop

iretex
  • This does not provide an answer to the question. Once you have sufficient [reputation](https://stackoverflow.com/help/whats-reputation) you will be able to [comment on any post](https://stackoverflow.com/help/privileges/comment); instead, [provide answers that don't require clarification from the asker](https://meta.stackexchange.com/questions/214173/why-do-i-need-50-reputation-to-comment-what-can-i-do-instead). - [From Review](/review/late-answers/32444798) – Mad Physicist Aug 15 '22 at 20:12
0

I had this error when doing git push after changing to HTTP/1.1.

Solution: turn off my VPN and re-run git push.

codeananda
0

git config

[core]
    autocrlf = input
    compression = 0
[remote "origin"]
    proxy = 127.0.0.1:1086
[http]
    version = HTTP/1.1
[https]
    postBuffer = 524288000

retry.sh

set -x
while true
do
  git clone xxxxx
  if [ $? -eq 0 ]; then
    break
  fi
done
oaib
  • Your answer could be improved with additional supporting information. Please [edit] to add further details, such as citations or documentation, so that others can confirm that your answer is correct. You can find more information on how to write good answers [in the help center](/help/how-to-answer). – Friedrich Mar 02 '23 at 16:48
-1

Try changing the git clone protocol.

For example, this error happened with "git clone https://xxxxxxxxxxxxxxx".

You can try "git clone git://xxxxxxxxxxxxxx" instead; it may work then.

Bingnan
-8

These steps worked for me:

cd [dir]
git init
git clone [your Repository Url]

I hope that works for you too.

Michel
-16

Try this:

$ git config --global user.name "John Doe"
$ git config --global user.email johndoe@example.com

https://git-scm.com/book/en/v2/Getting-Started-First-Time-Git-Setup

This worked for me.

  • 1
    The bug is sporadic due to an unreliable network. The solution presented here didn't actually fix the problem. The network just happened to be more reliable at the moment you tried cloning again. – John Pick Dec 03 '20 at 19:45