Most version control systems are optimised for "small-ish text files". Storing a 100MB file in any VCS will take up at least 100MB of disk space somewhere (assuming it can't easily be compressed). If you store 3 completely different versions, that's 300MB of storage somewhere.
The difference with distributed version control systems, such as git, is that they include the full history in every working copy. That means every version of every file takes up space in every working copy, forever, even if the file is deleted in a later revision. (With a centralised VCS, this space would only be spent on the central server.)
There is, however, a bright side: git is quite smart about how it stores things, at two levels of abstraction:
- At one level, git is a "content-addressed database": it stores "blobs" based on a hash of their content. That means a file needs a new blob only when its content changes, and in fact only when that content has never occurred before anywhere in the repository's history (see the sketch after this list).
- At the next level down, even that "blob" may not be stored in full on the file system, because a packfile may include it as a delta (a set of changes) from a similar blob.
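To make that first level concrete, here is a minimal Python sketch of how git derives a blob's object ID purely from the file's content, using the classic SHA-1 object format ("blob <size>\0" followed by the raw bytes). The file name plays no part, which is why a renamed or re-introduced but byte-identical file costs nothing extra:

```python
import hashlib

def git_blob_id(content: bytes) -> str:
    # git hashes "blob <size>\0" followed by the raw content
    # (SHA-1 in the default object format).
    header = b"blob %d\x00" % len(content)
    return hashlib.sha1(header + content).hexdigest()

logo_a = b"...binary data of logo A..."
logo_b = b"...binary data of logo B..."

print(git_blob_id(logo_a))   # one blob for logo A
print(git_blob_id(logo_b))   # different content, so a different blob
print(git_blob_id(logo_a) == git_blob_id(b"...binary data of logo A..."))
# True: re-adding byte-identical content, under any file name, maps to the
# blob that already exists, so it takes no extra space.
```

You can check the same IDs against real files with `git hash-object <file>`.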
That leads to a few considerations for deciding when LFS, or some other out-of-repo solution, might be useful:
- How big is the file? If it's a few megabytes, that may on its own be reason enough not to include it in history.
- How frequently does it change? A 100KB file regenerated with random content on every commit would add a megabyte to every working copy for every 10 commits (see the rough estimate after this list).
- Is every version actually different? If you have two different versions of a logo, and you keep changing your mind about which to use, switching back and forth won't take up any extra space as long as you use exactly the same two files. The same applies to renaming a file: if the content is unchanged, no extra "blob" is needed.
- How different are the versions, from a binary point of view? If you keep appending to a very long log file, git will probably notice and store the versions as compact deltas in a packfile (see the delta sketch after this list).
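As a rough illustration of the "how frequently does it change" point (a back-of-the-envelope estimate, not a measurement), a file rewritten with unrelated random content on every commit gets a brand-new full-size blob each time, so the cost simply multiplies:

```python
# Back-of-the-envelope growth for a 100KB file regenerated with random
# content on every commit: no deduplication and no useful deltas, so each
# version needs its own full-size blob in every clone.
file_size_kb = 100
for commits in (10, 100, 1000):
    extra_mb = file_size_kb * commits / 1024
    print(f"after {commits:>4} commits: ~{extra_mb:.1f} MB of extra history")
```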
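And for the last point, here is a toy sketch of why an append-only file deltas so well. It only illustrates the idea; it is not git's actual packfile delta encoding, which uses binary copy/insert instructions against a chosen base object:

```python
def naive_delta(base: bytes, target: bytes) -> bytes:
    # Toy delta: if the new version simply extends the old one (a growing
    # log file), store a "copy the old bytes" instruction plus the new tail.
    if target.startswith(base):
        return b"COPY %d; APPEND " % len(base) + target[len(base):]
    return b"FULL " + target  # otherwise fall back to storing everything

log_v1 = b"a fairly long log line\n" * 10_000   # ~225 KB
log_v2 = log_v1 + b"one more line\n"            # same file, one line appended

print(len(log_v2))                       # full size of the new version
print(len(naive_delta(log_v1, log_v2)))  # a few dozen bytes as a delta
```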