
Here is the problem:

I created a bare Git repository at my hosting partner's place, which I use as the reference repository for all the locations/computers I maintain my project from.

The thing is that my project uses an SQLite db file, which keeps growing regularly (it is about 150 MB for now). As time passes, my .git folder gets bigger and bigger (lately around 1 GB), and my hosting space is limited.

I need the bare repository to contain the HEAD version of this db file but I really do not need to keep its version history.

So, to gain some space, from time to time, I remove the db file from the history, clean the repository and recreate the bare version. This works, but is quite a pain.

Is there a way to tell git to keep only the last version of a file and drop its history?

Benoît Vidis
    related question: http://stackoverflow.com/questions/540535/managing-large-binary-files-with-git – jfs Feb 12 '10 at 13:18
  • this might not be a direct solution, but why not keep the database file untracked and make a script that synchronizes your file with the one in the main repository? – Ahmed Kotb Feb 12 '10 at 13:19
  • Why do you need this db file, to keep a copy of the schema, or the data? Or both? – Ben James Feb 12 '10 at 13:21

5 Answers


Short answer: no.

More useful answer: Git doesn't track files individually, so asking it to throw away the history of a single file would mean that it would have to rewrite all of its history completely upon every commit, and that leads to all kinds of ugly problems.

You can store a file in an annotated tag, but that's not very convenient. It basically goes like this:

ID=$(git hash-object -w yourfile.sqlite)
git tag -a -m "Tag database file" mytag "$ID"

In no way does that conveniently update (or even create) the database file in the working tree for you... you'd have to use hook scripts to emulate that.
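A minimal sketch of such a hook script, using the `mytag` and `yourfile.sqlite` names from the snippet above (the restore logic is wrapped in a function; this is an illustration, not a full hook):

```shell
#!/bin/sh
# Sketch of .git/hooks/post-checkout: if the annotated tag "mytag" from the
# snippet above exists and points at a blob, write that blob back out as
# yourfile.sqlite in the working tree. Quietly does nothing otherwise.
restore_db() {
    if blob=$(git rev-parse --verify -q 'mytag^{}'); then
        git cat-file blob "$blob" > yourfile.sqlite
    fi
}
restore_db
```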

Full disclosure: I'm not completely sure whether it's actually possible to push tagged blobs that aren't covered by the normal history. I suspect that it isn't, in which case this recipe would be a lot less useful.

Jan Krüger

It sounds like you're looking for the solution to the wrong problem.

Large binary files do often need to be stored in repositories, but I don't think a SQLite database is something you would really need to store in its binary form in a repository.

Rather, you should keep the schema in version control, and if you need to keep data too, serialize it (to XML, JSON, YAML...) and version that too. A build script can create the database and unserialize the data into it when necessary.

Because a text-based serialization format can be tracked efficiently by Git, you won't need to worry about the space overhead of keeping past versions, even if you don't think you'll need access to them.
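A minimal sketch of that round trip using SQLite's own `.dump` facility (`app.db` and `app.sql` are placeholder names, the `sqlite3` command-line tool is assumed to be installed, and a throwaway database is created so the sketch runs end to end):

```shell
# Work in a scratch directory and create a throwaway database; in
# practice app.db would be your real database file.
cd "$(mktemp -d)"
sqlite3 app.db 'CREATE TABLE t(id INTEGER); INSERT INTO t VALUES (42);'

# Serialize: app.sql is plain SQL text -- this is the file you version.
sqlite3 app.db .dump > app.sql

# Build step: recreate the binary database from the tracked dump.
rm app.db
sqlite3 app.db < app.sql
```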

Ben James
  • doing so would allow git to apply its usual compression and diffing techniques making this much less painful. The only thing to take care of would be creating a properly sorted serialization format that would minimize the size of the diff. – David Schmitt Feb 12 '10 at 13:31
  • I do not agree. If you look at the sqlite format, it is not that binary. Git is perfectly able to generate some usable diffs with it. The only benefit would be that diffs would be easier to read in case of conflict. Having to handle a text serialization layer is far too much work if you ask me – Benoît Vidis Feb 12 '10 at 13:50
  • This is a cool idea... is there a favorite script you have for doing text-based serialization? – AlexMA May 04 '12 at 23:22

You can always use a .gitignore file for this – from the beginning.

And ... (from this thread: kudos to Björn Steinbrink!)

Use filter-branch to drop the parents on the first commit you want to keep, and then drop the old cruft.

Let's say $drop is the hash of the latest commit you want to drop. To keep things sane and simple, make sure the first commit you want to keep, i.e. the child of $drop, is not a merge commit. Then you can use:

git filter-branch --parent-filter "sed -e 's/-p $drop//'" \
    --tag-name-filter cat -- \
    --all ^$drop

The above rewrites the parents of all commits that come "after" $drop.

Check the results with gitk.

Then, to clean out all the old cruft.

First, the backup references from filter-branch:

git for-each-ref --format='%(refname)' refs/original | \
    while read ref
    do
            git update-ref -d "$ref"
    done

Then clean your reflogs:

git reflog expire --expire=0 --all 

And finally, repack and drop all the old unreachable objects:

git repack -ad
git prune   # For objects that repack -ad might have left around

At that point, everything leading up to and including $drop should be gone.
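Put together, the steps above can be wrapped in one small script (a sketch: `prune_history` is a hypothetical name, its argument is the hash of the newest commit to discard, and you should try it on a throwaway clone first, since filter-branch rewrites history):

```shell
#!/bin/sh
# Sketch: drop everything up to and including the given commit, then
# reclaim the space. Run it on a throwaway clone first.
prune_history() {
    drop=$1

    # rewrite the parents of all commits that come "after" $drop
    git filter-branch --parent-filter "sed -e 's/-p $drop//'" \
        --tag-name-filter cat -- \
        --all "^$drop"

    # delete filter-branch's refs/original/* backup refs
    git for-each-ref --format='%(refname)' refs/original/ |
    while read ref; do
        git update-ref -d "$ref"
    done

    # expire reflogs, repack, and drop the unreachable objects
    git reflog expire --expire=0 --all
    git repack -ad
    git prune   # objects that repack -ad might have left around
}

if [ -n "$1" ]; then prune_history "$1"; fi
```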

David
Zsolt Botykai

If I understand your question, I think I have a simple solution.

  1. First backup the file somewhere,
  2. Delete it from your working dir/tree. Not git rm, just rm.
  3. Do a commit (use git commit -a, so the deletion is actually recorded).
  4. Make sure the file is added to .gitignore.

On subsequent commits, Git will no longer attempt to add that file. Note that the file is still stored in previous commits; you just won't be adding it to every commit you make in the future. To delete it from prior commits, you'll need advice from someone with more Git experience than I have.
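A sketch of those steps as commands, using `git rm --cached` instead of a backup plus plain `rm` (it untracks the file while leaving the working copy on disk; `sqlite.db` is a placeholder name, and demo setup is included so the sketch runs end to end):

```shell
#!/bin/sh
# demo setup: a scratch repo that already tracks sqlite.db
cd "$(mktemp -d)"
git init -q
git config user.email you@example.com
git config user.name you
echo demo > sqlite.db
git add sqlite.db && git commit -qm "before: db is tracked"

# the actual steps: untrack the file, ignore it, commit
git rm --cached -q sqlite.db       # stop tracking; working copy stays
echo 'sqlite.db' >> .gitignore     # future commits won't pick it up
git add .gitignore
git commit -qm "Stop tracking sqlite.db"
```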


Add sqlite.db to your .gitignore.

To check-in the current db for (potential) pushing with the current branch:

branch="$(sed 's,.*refs/heads/,,' "$(git rev-parse --git-dir)"/HEAD)"
objectname=$(git hash-object -w "$(git rev-parse --show-toplevel)/sqlite.db")
git tag -f db_heads/$branch $objectname

when pushing a branch:

git push origin $branch +db_heads/$branch

When fetching a branch:

git fetch origin $branch +refs/tags/db_heads/$branch:refs/tags/db_heads/$branch

when checking out a branch:

git checkout $branch
git cat-file -p db_heads/$branch >"$(git rev-parse --show-toplevel)/sqlite.db"

And that should do it, I think.
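The checkout step could in turn be automated with a post-checkout hook (a sketch; the `db_heads/<branch>` tag naming and `sqlite.db` path follow the answer above, and the restore logic is wrapped in a function):

```shell
#!/bin/sh
# Sketch of .git/hooks/post-checkout: after a branch checkout, restore
# sqlite.db from the db_heads/<branch> tag for that branch, if one exists.
restore_db() {
    branch=$(git symbolic-ref --short -q HEAD) || return 0   # detached HEAD: skip
    if blob=$(git rev-parse --verify -q "refs/tags/db_heads/$branch^{}"); then
        git cat-file blob "$blob" > "$(git rev-parse --show-toplevel)/sqlite.db"
    fi
}
# post-checkout's third argument is 1 for branch checkouts, 0 for file checkouts
if [ "$3" = "1" ]; then restore_db; fi
```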

jthill