11

In our Continuous Integration environment we make heavy use of git clone and git update.

Sometimes the network or the central git server is not reliable.

Is there a way to tell git to retry if the http request failed?

Example:

500 Internal Server Error while accessing https://example.com/repos/foo_bar/info/refs?service=git-upload-pack

guettli
    The Git command should return a proper [exit code](https://en.wikipedia.org/wiki/Exit_status) which you can check for success of the command. If it didn’t succeed, just retry N times before you abort. – poke Jan 26 '16 at 12:40
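A minimal sketch of what poke's comment describes — check the exit status, retry N times, then give up. The `retry` helper and its limits are my own illustration, with `true`/`false` standing in for a real git invocation:

```shell
#!/bin/bash
# Retry a command up to a fixed number of attempts, checking its
# exit status after each try. Hypothetical helper, not part of git.
retry() {
  local n=0 max=3
  until "$@"; do
    n=$((n + 1))
    if [ "$n" -ge "$max" ]; then
      echo "giving up after $max attempts" >&2
      return 1
    fi
    sleep 1
  done
}

retry true && echo "ok"         # succeeds on the first try
retry false || echo "failed"    # exhausts all attempts, returns 1
```

In a CI script you would replace `true`/`false` with the actual command, e.g. `retry git clone https://example.com/repos/foo_bar`.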

5 Answers

9

You could run a script like this instead of calling git directly.

#!/bin/bash

REALGIT=/usr/bin/git

RETRIES=3
DELAY=10
COUNT=1
while [ $COUNT -le $RETRIES ]; do
  $REALGIT $*
  if [ $? -eq 0 ]; then
    exit 0  # success
  fi
  COUNT=$((COUNT + 1))
  sleep $DELAY
done
exit 1  # all attempts failed
Vorsprung
  • That’s my comment in code ;) – Exactly how I imagined it, nice! – poke Jan 26 '16 at 12:51
  • I guess above solution would retry for `git grep ...`, too. – guettli Jan 26 '16 at 15:12
    You should replace `$REALGIT $*` with `$REALGIT "$@"` according to https://stackoverflow.com/questions/12314451/accessing-bash-command-line-args-vs – JGK Apr 14 '20 at 07:14
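JGK's point matters as soon as an argument contains spaces, which is common for commit messages. A quick illustration of the difference (the `demo_*` functions are my own):

```shell
#!/bin/bash
# Show why "$@" must be quoted when forwarding arguments.
count_args() { echo $#; }

demo_star() { count_args $*; }    # unquoted $*: arguments are word-split again
demo_at()   { count_args "$@"; }  # quoted "$@": arguments pass through intact

demo_star commit -m "fix retry bug"   # prints 5: the message is split into words
demo_at   commit -m "fix retry bug"   # prints 3: the message stays one argument
```

With `$REALGIT $*`, a call like `git commit -m "fix retry bug"` would run `commit -m fix retry bug` — four separate arguments instead of three.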
8

Is there a way to tell git to retry if the http request failed?

No, git itself does not natively support that feature.

But interestingly enough, the idea of wrapping a git command in order to retry its execution has been done before: see "git-retry(1) Manual Page ", part of depot_tools, a collection of tools for dealing with Chromium development.

The shell wrapper git-retry calls the python script git_retry.py with the following options:

'-v', '--verbose', default=0,

Increase verbosity; can be specified multiple times

'-c', '--retry-count', default=GitRetry.DEFAULT_RETRY_COUNT (5),

Number of times to retry (default=5)

'-d', '--delay', default=GitRetry.DEFAULT_DELAY_SECS (3.0)

Specifies the amount of time (in seconds) to wait between successive retries (default=3 sec.). This can be zero.

'-D', '--delay-factor', default=2

The exponential factor to apply to delays between successive failures (default=2). If this is zero, delays will increase linearly. Set it to one for a constant (non-increasing) delay.
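The `--delay`/`--delay-factor` combination describes exponential backoff: the wait for attempt n is roughly base_delay × factor^(n-1). A sketch of that computation in integer shell arithmetic (my own illustration, not depot_tools' actual code):

```shell
#!/bin/bash
# backoff_delay BASE FACTOR ATTEMPT
# Delay before retry ATTEMPT: BASE * FACTOR^(ATTEMPT-1), integer arithmetic.
backoff_delay() {
  local base=$1 factor=$2 attempt=$3
  local d=$base i
  for ((i = 1; i < attempt; i++)); do
    d=$((d * factor))
  done
  echo "$d"
}

backoff_delay 3 2 1   # prints 3  (first retry: the base delay)
backoff_delay 3 2 2   # prints 6
backoff_delay 3 2 3   # prints 12
```

With `--delay-factor 1` every call returns the base delay, matching the "constant (non-increasing) delay" behavior described above.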


Note: a git clone for a repo with submodules will always try to clone a submodule twice (one retry). See "Is there any way to continue Git clone from the point where it failed?".

VonC
  • This solution doesn't do anything that a simple shell script does but adds a dependency on an external library – Vorsprung Feb 11 '16 at 17:22
5

This is a wrapper around git commands; it recognizes when a command fails and retries it:

git retry [-v] [-c COUNT] [-d DELAY] [-e] — <git_subcommand>

More information on this can be found here.

Phil
4

Run the command below:

while ! git push; do sleep 7; done
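The one-liner retries forever, which can hang a CI job. A bounded variant might look like this (the `push_with_retries` name is my own; `git push` would be the real command):

```shell
#!/bin/bash
# push_with_retries MAX DELAY COMMAND...
# Like "while ! cmd; do sleep DELAY; done", but give up after MAX attempts.
push_with_retries() {
  local max=$1 delay=$2; shift 2
  local n=1
  until "$@"; do
    [ "$n" -ge "$max" ] && return 1
    n=$((n + 1))
    sleep "$delay"
  done
}

# push_with_retries 5 7 git push   # same as the one-liner, capped at 5 tries
```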

SteveScm
1

Stumbled upon this as well because I wanted to make some of my scripts a bit more resilient to network issues. I went for a bash retry loop as well and tried to make it as concise as possible.

I just wanted to leave my version here in case it helps anyone:

#!/bin/bash
set -Eeuo pipefail

RETRIES_NO=5
RETRY_DELAY=3
for i in $(seq 1 $RETRIES_NO); do
  git my-maybe-failing-command && break
  [[ $i -eq $RETRIES_NO ]] && echo "Failed to execute git cmd after $RETRIES_NO retries" && exit 1
  sleep ${RETRY_DELAY}
done
echo "success"

It would also be very easy to write this as a function that you call with the command, retry-number, and delay to retry any kind of bash command that you want.
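That function version might look like this — a sketch in the style of the loop above, with names of my own choosing:

```shell
#!/bin/bash
set -Eeuo pipefail

# retry COUNT DELAY COMMAND...
# Run COMMAND up to COUNT times, sleeping DELAY seconds between attempts.
retry() {
  local retries=$1 delay=$2; shift 2
  local i
  for i in $(seq 1 "$retries"); do
    "$@" && return 0
    [ "$i" -eq "$retries" ] && break
    sleep "$delay"
  done
  echo "Failed to execute '$*' after $retries retries" >&2
  return 1
}

# retry 5 3 git fetch origin
```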

dom