According to https://opensource.com/life/16/8/how-manage-binary-blobs-git-part-7:
One thing everyone seems to agree on is Git is not great for big binary blobs. Keep in mind that a binary blob is different from a large text file; you can use Git on large text files without a problem, but Git can't do much with an impervious binary file except treat it as one big solid black box and commit it as-is.
Say you have a complex 3D model for the exciting new first person puzzle game you're making, and you save it in a binary format, resulting in a 1 gigabyte file. You git commit it once, adding a gigabyte to your repository's history. Later, you give the model a different hair style and commit your update; Git can't tell the hair apart from the head or the rest of the model, so you've just committed another gigabyte. Then you change the model's eye color and commit that small change: another gigabyte. That is three gigabytes for one model with a few minor changes made on a whim. Scale that across all the assets in a game, and you have a serious problem.
My understanding was that Git makes no distinction between text and binary files: for every commit it stores each file in its entirety as a checksummed blob, with unchanged files simply pointing to an already existing blob. How all those blobs are stored and compressed is a separate question, whose details I don't know, but I would have assumed that if the various 1 GB files in the quote are largely the same, a good compression algorithm would notice the redundancy and might even store all of them in less than 1 GB total. That reasoning should apply to binary files just as much as to text files.
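To illustrate the "checksummed blob" part of my understanding: Git names a blob by the SHA-1 of a short header ("blob", the content length, a NUL byte) followed by the raw file bytes, so the identifier depends only on the content, not on the file name or on whether the content is text or binary. A minimal sketch (the helper name is mine):

```python
import hashlib

def git_blob_hash(data: bytes) -> str:
    """Compute the SHA-1 Git assigns to a file's contents.

    Git hashes the header b"blob <size>\\0" followed by the raw bytes,
    which is why identical content always maps to the same blob and an
    unchanged file costs nothing extra in a later commit.
    """
    header = b"blob %d\x00" % len(data)
    return hashlib.sha1(header + data).hexdigest()

# Text and binary content go through exactly the same path:
print(git_blob_hash(b"hello\n"))       # same value `git hash-object` prints for these bytes
print(git_blob_hash(b"\x00\x01\x02"))  # binary bytes are hashed identically
```

Note this only covers deduplication of identical files; whether Git delta-compresses *similar* blobs when it packs them is exactly the part I'm unsure about.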
Contrary to this, the quote continues:
Contrast that to a text file like the .obj format. One commit stores everything, just as with the other model, but an .obj file is a series of lines of plain text describing the vertices of a model. If you modify the model and save it back out to .obj, Git can read the two files line by line, create a diff of the changes, and process a fairly small commit. The more refined the model becomes, the smaller the commits get, and it's a standard Git use case. It is a big file, but it uses a kind of overlay or sparse storage method to build a complete picture of the current state of your data.
Is my understanding correct? Is the quote incorrect?