
I'm wondering about what git is doing when it pushes up changes, and why it seems to occasionally push way more data than the changes I've made. I made some changes to two files that added around 100 lines of code - less than 2k of text, I'd imagine.

When I went to push that data up to origin, git turned it into over 47 MiB of data:

git push -u origin foo
Counting objects: 9195, done.
Delta compression using up to 4 threads.
Compressing objects: 100% (6624/6624), done.
Writing objects: 100% (9195/9195), 47.08 MiB | 1.15 MiB/s, done.
Total 9195 (delta 5411), reused 6059 (delta 2357)
remote: Analyzing objects... (9195/9195) (50599 ms)
remote: Storing packfile... done (5560 ms)
remote: Storing index... done (15597 ms)
To <<redacted>>
 * [new branch]      foo -> foo
Branch foo set up to track remote branch foo from origin.

When I diff my changes (origin/master..HEAD), only the two files and the one commit I made show up. Where did the 47 MiB of data come from?

I saw this: When I do "git push", what do the statistics mean? (Total, delta, etc.) and this: Predict how much data will be pushed in a git push, but neither really tells me what's going on. Why would the pack / bundle be so huge?

user3330678

2 Answers


I just realized that there is a very realistic scenario which can result in an unusually big push.

Which objects does push send? Those which do not yet exist on the server, or rather, those which it did not detect as existing. How does it check object existence? At the beginning of a push, the server sends the references (branches and tags) which it has. So, for example, if they have the following commits:

  CLIENT                                     SERVER
 (foo) -----------> aaaaa1
                      |
 (origin/master) -> aaaaa0                (master) -> aaaaa0
                      |                                 |
                     ...                               ...

Then the client will get something like refs/heads/master aaaaa0, and find that it has to send only what is new in commit aaaaa1.

But if somebody has pushed something to the remote master in the meantime, it is different:

  CLIENT                                     SERVER
 (foo) -----------> aaaaa1                      (master) --> aaaaa2
                      |                                       /
 (origin/master) -> aaaaa0                                 aaaaa0
                      |                                      |
                     ...                                    ...

Here the client gets refs/heads/master aaaaa2, but it knows nothing about aaaaa2, so it cannot deduce that aaaaa0 exists on the server. So in this simple case of only two branches, the whole history will be sent instead of just the increment.
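You can see the server's side of that exchange directly: `git ls-remote` prints exactly the "&lt;sha&gt; &lt;ref&gt;" advertisement the client receives before sending anything. A throwaway sketch with local paths (all names here are made up for the demo):

```shell
set -e
tmp=$(mktemp -d)
# A bare repo standing in for the server, and a client pushing to it.
git init -q --bare "$tmp/server.git"
git init -q "$tmp/client"
cd "$tmp/client"
git -c user.name=demo -c user.email=demo@example.com \
    commit -q --allow-empty -m "base commit (aaaaa0 in the diagram)"
git remote add origin "$tmp/server.git"
git push -q origin HEAD:refs/heads/master
# This is all the client learns about the server's state up front:
git ls-remote origin
```

Git only skips sending objects it can prove reachable from SHAs in that list which it recognizes locally.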

This is unlikely to happen in a grown-up, actively developed project, which has tags and many branches, some of which become stale and are not updated. Such users might send a bit more data than needed, but the difference does not become as big as in your case, so it goes unnoticed. In very small teams, though, it can happen more often, and the difference can be significant.

To avoid it, you can run git fetch before pushing. Then, in my example, the aaaaa2 commit would already exist on the client, and git push foo would know that it should not send aaaaa0 and the older history.
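The whole scenario from the diagrams can be reproduced end to end in a throwaway setup (client names `alice` and `bob` are made up; the commit messages mirror the diagram labels):

```shell
set -e
tmp=$(mktemp -d)
git init -q --bare "$tmp/origin.git"
git -C "$tmp/origin.git" symbolic-ref HEAD refs/heads/master

# First client seeds the server with the shared history (aaaaa0).
git init -q "$tmp/alice" && cd "$tmp/alice"
git remote add origin "$tmp/origin.git"
git -c user.name=a -c user.email=a@example.com commit -q --allow-empty -m aaaaa0
git push -q origin HEAD:refs/heads/master

# Second client clones while master still points at aaaaa0.
git clone -q "$tmp/origin.git" "$tmp/bob"

# Meanwhile somebody moves the server's master forward (aaaaa2).
git -c user.name=a -c user.email=a@example.com commit -q --allow-empty -m aaaaa2
git push -q origin HEAD:refs/heads/master

# Bob builds foo on the old tip. Fetching before pushing teaches his git
# that aaaaa2 is on the server, so aaaaa0 and older need not be resent.
cd "$tmp/bob"
git checkout -q -b foo
git -c user.name=b -c user.email=b@example.com commit -q --allow-empty -m aaaaa1
git fetch -q origin
git push -q -u origin foo
```

Without the `git fetch`, bob's git cannot prove aaaaa0 is already on the server once master has moved to aaaaa2.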

For the protocol-level implementation of push, read Git's pack protocol documentation.

PS: the recent git commit-graph feature might help with this, but I have not tried it.
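For reference, the commit-graph file is generated explicitly like this (a throwaway-repo sketch; whether it actually changes what push sends is exactly the untested part above):

```shell
set -e
tmp=$(mktemp -d)
git init -q "$tmp/repo" && cd "$tmp/repo"
git -c user.name=demo -c user.email=demo@example.com \
    commit -q --allow-empty -m "first"
# Write a commit-graph file covering all reachable commits; it speeds up
# commit-walking operations such as reachability checks.
git commit-graph write --reachable
ls .git/objects/info/commit-graph
```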

max630
  • "In the beginning of push, server sends references (branches and tags) which is has" Can you provide some link documenting that Git works this way? – Suma Mar 07 '16 at 09:54
  • [this](https://github.com/git/git/blob/master/Documentation/technical/pack-protocol.txt) at least clearly states what you have quoted. I need some time to read the rest. – max630 Mar 07 '16 at 13:57
  • Checked the doc - seems correct. Was confused because it contains fetch section also. – max630 Mar 07 '16 at 20:47

When I went to push that data up to origin, git turned that into over 47mb of data..

Looks like your repository contains a lot of binary data.


First, let's see what git push does:

git-push - Update remote refs along with associated objects


What are those associated objects?

Each commit you make creates loose objects; git later packs them into files named XX.pack and XX.idx, for example when you run git gc or on the fly when you push.


A good read about packing is here

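You can watch the packing happen in a throwaway repository: `git count-objects -v` distinguishes loose objects from packed ones, and `git gc` triggers the repack (file names here are made up):

```shell
set -e
tmp=$(mktemp -d)
git init -q "$tmp/repo" && cd "$tmp/repo"
echo 'hello' > file.txt
git add file.txt
git -c user.name=demo -c user.email=demo@example.com commit -q -m initial
git count-objects -v   # "count" = loose objects (the fresh blob/tree/commit)
git gc --quiet         # packs them into .git/objects/pack/XX.pack + XX.idx
git count-objects -v   # now "in-pack" is non-zero and "count" drops to 0
```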

How does git pack files?

The packed archive format .pack is designed to be self-contained so that it can be unpacked without any further information.
Therefore, each object that a delta depends upon must be present within the pack.

A pack index file .idx is generated for fast, random access to the objects in the pack.

Placing both the index file .idx and the packed archive .pack in the pack subdirectory of $GIT_OBJECT_DIRECTORY (or any of the directories on $GIT_ALTERNATE_OBJECT_DIRECTORIES) enables Git to read from the pack archive.
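Those .pack/.idx files can be inspected with `git verify-pack` (a throwaway-repo sketch; the hash in the pack's filename will differ per repo):

```shell
set -e
tmp=$(mktemp -d)
git init -q "$tmp/repo" && cd "$tmp/repo"
# Two commits with overlapping content give git something to delta against.
seq 1 300 > big.txt
git add big.txt
git -c user.name=demo -c user.email=demo@example.com commit -q -m one
echo extra >> big.txt
git add big.txt
git -c user.name=demo -c user.email=demo@example.com commit -q -m two
git gc --quiet
# Each line: sha1, type, size, size-in-packfile, offset; deltified entries
# additionally show a delta depth and the base object they point at.
git verify-pack -v .git/objects/pack/pack-*.idx
```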

When git packs your files, it does so in a smart way, so that extracting the data later is very fast.

To achieve this, git uses pack heuristics, which basically look for similar content across the objects in your pack and store it only once. Meaning: if you have the same header (a license agreement, for example) in many files, git will "find" it and store it once.

All the files which include this license will then contain a delta pointing at that shared content. Git doesn't have to store the same bytes over and over, so the pack size stays minimal.

This is one of the reasons why it's not recommended to store binary files in git: the chance of finding such similarity is very low, so the pack size will not be optimal.

Git also stores your data zlib-compressed to reduce space, and again, binary files usually compress poorly (size-wise).
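A small experiment shows the zlib effect (file names are made up; a loose object lives on disk at .git/objects/&lt;first two hex chars of its id&gt;/&lt;rest&gt;):

```shell
set -e
tmp=$(mktemp -d)
git init -q "$tmp/repo" && cd "$tmp/repo"

# Repetitive text compresses very well...
seq 1 2000 > text.dat
sha=$(git hash-object -w text.dat)
dir=$(echo "$sha" | cut -c1-2); rest=$(echo "$sha" | cut -c3-)
echo "text: $(git cat-file -s "$sha") bytes raw, $(wc -c < ".git/objects/$dir/$rest") bytes on disk"

# ...while random, binary-like data gains little or nothing.
head -c 10000 /dev/urandom > bin.dat
sha2=$(git hash-object -w bin.dat)
dir2=$(echo "$sha2" | cut -c1-2); rest2=$(echo "$sha2" | cut -c3-)
echo "binary: $(git cat-file -s "$sha2") bytes raw, $(wc -c < ".git/objects/$dir2/$rest2") bytes on disk"
```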


Here is a sample of a git blob using zlib compression:

[image of a zlib-compressed git blob]

CodeWizard