
I am using GitLab hosted on my server. On the server, one repository's pack file is 500 MB, but when I clone it locally, the local pack file is only 250 MB. Is this a problem?

Sanster
  • I ran `git gc` on the server and the pack shrank to 250 MB. But why did the pack file on the remote grow so large? The answer in [this question](http://stackoverflow.com/questions/3162786/how-can-i-trigger-garbage-collection-on-a-git-remote-repository) mentions that `a remote repo shouldn't need all that much garbage collection....` – Sanster Jan 30 '16 at 07:23

2 Answers


It's not necessarily a problem. You can use `git cat-file` as shown here to compare the specifics of your trees.
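A quick way to compare what each side actually stores is `git count-objects -v`, run on both the server repository and your local clone; the `size-pack` field reports the total packfile size in KiB. A sketch (demonstrated here on a throwaway repository, since the real paths depend on your setup):

```shell
set -e
# Throwaway repo just to show the output format; on your machines,
# run the last command inside the server repo and inside your clone.
repo=$(mktemp -d)
cd "$repo"
git init -q
git -c user.name=demo -c user.email=demo@example.com \
    commit -q --allow-empty -m init

# size-pack = total size of all packfiles in KiB; compare this value
# between the server repository and your local clone.
git count-objects -v
```

If the object `count` and `in-pack` numbers match on both sides but `size-pack` differs, the two repositories hold the same history and only differ in how well it is packed.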

mbb

> On the server, one repository's pack file is 500 MB, but when I clone it locally, the local pack file is 250 MB. Is this a problem?

The size of the pack can differ between the remote server and your local clone.
It is not a problem in itself, but a large pack file will hurt repository performance.

Every time you commit, Git adds new objects to the repository, which are eventually packed. Git stores packed data in two main files: a `.pack` file and a matching `.idx` file. The pack file holds the compressed content of your repository, while the index lets Git locate each object inside the pack.
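You can see the pack/index pairing directly on disk and inspect a pack's contents with `git verify-pack` (a sketch on a throwaway repository; in your own repository the `pack-<hash>` names will differ):

```shell
set -e
repo=$(mktemp -d)
cd "$repo"
git init -q
git -c user.name=demo -c user.email=demo@example.com \
    commit -q --allow-empty -m init
git gc --quiet                  # force loose objects into a pack

# Each pack-<hash>.pack has a matching pack-<hash>.idx
ls .git/objects/pack/

# verify-pack reads the .idx and lists every object in the pack,
# followed by summary statistics (object counts, delta chains)
git verify-pack -v .git/objects/pack/pack-*.idx
```

On a large repository, sorting the `verify-pack -v` output by the size column is a common way to find which objects make the pack big.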

If you look under `.git/objects/pack`, you might see several pack files.
In this case you should tell Git to optimize the repository with:

git gc --aggressive --prune=now

This command repacks the contents of the repository, and with `--prune=now` it also deletes all unreachable loose objects immediately, minimizing the repository size.
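To see the effect, measure the object store before and after repacking. A sketch on a throwaway repository (on a real repository, just run the two `du` commands around `git gc`):

```shell
set -e
repo=$(mktemp -d)
cd "$repo"
git init -q
git -c user.name=demo -c user.email=demo@example.com \
    commit -q --allow-empty -m init

du -sh .git/objects             # size before repacking
git gc --aggressive --prune=now --quiet
du -sh .git/objects             # size after repacking
```

On a freshly cloned repository the difference will be small; on a long-lived server repository with many loose and unreachable objects, the drop can be substantial, which is consistent with the 500 MB pack shrinking to 250 MB after `git gc`.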

> `--prune=<date>`
>
> Prune loose objects older than date (default is 2 weeks ago, overridable by the config variable `gc.pruneExpire`). `--prune=all` prunes loose objects regardless of their age. `--prune` is on by default.

CodeWizard