Let's say one wants to create a 1TB git repository in which data is updated here and there regularly. Whether that's a good idea is not part of the question (it probably isn't), so let's take it as a prerequisite. It's 1TB of ASCII data in the working directory, but how it's chunked into files (lots of tiny ones, fewer larger ones, etc.) can be chosen arbitrarily.
Git with large files gives some good info on that, in particular that for large files `xdelta` seems to become the bottleneck, while for a huge number of files `git gc` seems to become the problem (though that answer is from 2013, so it may well be outdated).
Git LFS or VFS for Git is not to be employed; the data should be versioned and contained in the repository itself.
I'm definitely going to run some tests, but my question is whether anyone has practical experience with this and could recommend (or make an educated guess at) the range in which the optimal file size per chunk might lie.
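
For what it's worth, this is roughly the kind of test harness I have in mind; a minimal sketch scaled far below 1TB, where the chunk sizes, total size, and `repo-test-*` directory names are just placeholders I made up:

```python
import os
import random
import string
import subprocess
import time

def timed(cmd, cwd):
    """Run a command inside the repo and return the elapsed wall-clock seconds."""
    start = time.monotonic()
    subprocess.run(cmd, cwd=cwd, check=True, capture_output=True)
    return time.monotonic() - start

def benchmark(chunk_mb, total_mb, repo):
    """Fill a fresh repo with total_mb of ASCII data split into chunk_mb-sized files."""
    subprocess.run(["git", "init", "-q", repo], check=True)
    rng = random.Random(42)
    alphabet = string.ascii_letters + string.digits + " \n"
    for i in range(total_mb // chunk_mb):
        with open(os.path.join(repo, f"chunk_{i:05d}.txt"), "w") as f:
            for _ in range(chunk_mb):
                # Write 1 MiB of pseudo-random printable ASCII per iteration,
                # so delta compression actually has some work to do.
                f.write("".join(rng.choices(alphabet, k=1024 * 1024)))
    t_add = timed(["git", "add", "-A"], repo)
    t_commit = timed(["git", "-c", "user.name=bench", "-c", "user.email=bench@example.com",
                      "commit", "-q", "-m", "initial"], repo)
    t_gc = timed(["git", "gc", "--aggressive"], repo)
    print(f"{chunk_mb:4d} MB chunks: add {t_add:6.1f}s  commit {t_commit:6.1f}s  gc {t_gc:6.1f}s")

if __name__ == "__main__":
    # Candidate chunk sizes in MB; total volume kept small for a first pass.
    for size in (1, 16, 256):
        benchmark(size, total_mb=256, repo=f"repo-test-{size}mb")
```

The idea would be to sweep the chunk size over a few orders of magnitude, watch where `git add`, `git commit`, and `git gc` times start to blow up, and then repeat with the full data volume for the most promising sizes. I just don't want to burn days of machine time re-discovering a range someone here already knows.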