
I have a very large repo (> 1 GB because of binary assets). The GitLab server is installed correctly (the doctor check reports all OK), and I have created a small test git repo which works fine: push, pull, and so on.

Username for 'http://x.y.z': tyoc213
Password for 'http://tyoc213@x.y.z':
Counting objects: 4894, done.
Compressing objects: 100% (4872/4872), done.
error: RPC failed; result=55, HTTP code = 0
fatal: The remote end hung up unexpectedly
Writing objects: 100% (4894/4894), 506.89 MiB | 12.27 MiB/s, done.
Total 4894 (delta 2104), reused 0 (delta 0)
fatal: The remote end hung up unexpectedly
Everything up-to-date

I have tried changing git config http.postBuffer to 5000 or even bigger.
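For example (http.postBuffer is measured in bytes, so this sets it to 500 MiB; the exact number is just an illustration):

git config http.postBuffer 524288000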

I have even tried to push it from inside the server itself (I mean the git repo is already on the server) to GitLab, but the same error happens.

Is there a fix for this? What should I try? And how can it be "everything up-to-date"?

4 Answers

2

Check your config/unicorn.rb file.

It includes:

# nuke workers after 30 seconds instead of 60 seconds (the default)
#
# NOTICE: git push over http depends on this value.
# If you want be able to push huge amount of data to git repository over http
# you will have to increase this value too.
#
# Example of output if you try to push 1GB repo to GitLab over http.
#   -> git push http://gitlab.... master
#
#   error: RPC failed; result=18, HTTP code = 200
#   fatal: The remote end hung up unexpectedly
#   fatal: The remote end hung up unexpectedly

This isn't the exact same error, but the issue might be related.
Check your unicorn logs.
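A minimal sketch of that change, assuming a source install under /home/git/gitlab (paths and the restart command vary by installation):

# show the current worker timeout (the shipped config sets it to 30 seconds,
# per the comment quoted above)
grep '^timeout' /home/git/gitlab/config/unicorn.rb
# raise it, e.g. to "timeout 60", in config/unicorn.rb, then restart GitLab:
sudo service gitlab restart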

For "RPC failed; result=55", see my old answer, and try increasing the log level (GIT_CURL_VERBOSE=1 git push, see this example) and/or switch to ssh for testing if the issue persists.

If you don't see anything in the unicorn logs, it means GitLab is never reached.
If you have NGiNX in front of it, the common issue is its config for allowing large files: look for client_max_body_size xxM; and increase that value.
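A sketch, assuming the NGiNX config lives under /etc/nginx (adjust for your layout):

# find where (or whether) the limit is currently set
grep -R 'client_max_body_size' /etc/nginx/
# set e.g. "client_max_body_size 1024m;" in the GitLab server block, then:
sudo nginx -t && sudo service nginx reload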

2

I just ran into a similar issue, and even changing the git config was not able to resolve it. You may want to switch to SSH instead.

If you are using SourceTree, you can do the following.

  1. Create an SSH key and import it into your GitLab account: https://gitlab.com/help/ssh/README

  2. In SourceTree > Tools > Add SSH Keys > choose the SSH key that was created locally (id_rsa.pub)

  3. Go to the repo you were working on > Settings (top right corner)

  4. Under the Remotes tab > Add > enter a name and your SSH link (git@gitlab.com:xxxxxxxxxx/xxxxxxxxx.git)

  5. Once added, a new remote repo will show up; right-click it and fetch the repo

  6. Now you can push to the repo using SSH

Afterward you can remove the HTTPS remote. A plain-git equivalent of these steps is sketched below.
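A minimal command-line sketch of the same switch (the remote name origin and branch master are assumptions; the repo path matches the placeholder above):

# generate a key if you don't have one yet, then add ~/.ssh/id_rsa.pub
# to your GitLab account under Profile Settings > SSH Keys
ssh-keygen -t rsa -b 4096

# point the existing remote at the ssh URL and push
git remote set-url origin git@gitlab.com:xxxxxxxxxx/xxxxxxxxx.git
git push origin master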

1

Another option when working with a very large repository is to push it in stages. There is an Amazon AWS CodeCommit example script which creates tags (batching the repository every X commits) and then pushes the commits between tags.

Located at: http://docs.aws.amazon.com/codecommit/latest/userguide/how-to-push-large-repositories.html#how-to-push-large-repositories-sample

There are some other options at this URL as well, including using git gc --aggressive to reduce the size of the repository before pushing (see the sketch after the link):

Git, fatal: The remote end hung up unexpectedly
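A rough sketch of the batching idea (the 500-commit batch size, the remote name origin, and the branch master are all assumptions; tune them to your repo):

# optionally repack first to shrink what has to be sent
git gc --aggressive

# push every 500th commit, oldest first, so each transfer stays small
git rev-list --reverse HEAD | awk 'NR % 500 == 0' | while read -r commit; do
  git push origin "$commit:refs/heads/master"
done

# final push for whatever is left after the last batch
git push origin master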

0

I had the same problem while uploading a 4 GB repository. I wrote these small scripts to push the repo in stages, one file type at a time. This made the push a lot faster.

For Linux

#!/bin/bash
# usage: pass the file extension as the first argument, e.g. "js"
shopt -s globstar   # let ** match files in subdirectories
git add **/*."$1"
git commit -m "$1"
git push

For Windows

rem usage: pass the file extension as the first argument, e.g. "js"
set ext=%1
git add **/*.%ext%
git commit -m "%ext%"
git push

To upload all .js files I'd use:

this.bat js