The short answer to the question is just "no", but this short answer misses out on something important, which is this: files aren't in branches. Files are in commits.
At first, this might seem like a pointless distinction—and sometimes it is. But not always. So: when does it matter? The answer is: whenever a commit is in something other than exactly one branch. A commit can be in two or more branches, or in no branches at all.
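To see the "two or more branches, or none" cases concretely, here is a throwaway-repository sketch. It assumes Git 2.28 or later is installed (for `git init -b`); all names in it are made up:

```shell
# Build a scratch repository whose one commit sits on two branches, then none.
cd "$(mktemp -d)"
git init -q -b main
git -c user.email=you@example.com -c user.name=You \
    commit -q --allow-empty -m "shared commit"
git branch feature                      # main and feature now point at the same commit
n_before=$(git for-each-ref --contains HEAD refs/heads | wc -l | tr -d ' ')
echo "branches containing HEAD: $n_before"    # 2
git checkout -q --detach                # step off the branch so we can delete it
git branch -q -D main feature
n_after=$(git for-each-ref --contains HEAD refs/heads | wc -l | tr -d ' ')
echo "branches containing HEAD: $n_after"     # 0: the commit is now on no branch
```

The commit never changed; only the branch names pointing at (or above) it did.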
The use case here is knowing why `git fetch` is downloading a large amount of data (if somebody has pushed a large file to their branch, I'd like to know which branch).
A commit holding a large file might be in no branch. If it's in more than one branch, perhaps you can just assign it to all its containing branches.
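Going the other way, if you already know the hash of a large blob, Git can map it back to the commits that contain it, and from there to branches. A sketch, again in a scratch repository (it assumes Git 2.16 or later for `git log --find-object`; `big.bin` is a made-up file name):

```shell
# Find the commit(s) that contain a given blob, then the branches holding them.
cd "$(mktemp -d)"
git init -q -b main
printf 'pretend this file is huge\n' > big.bin
git add big.bin
git -c user.email=you@example.com -c user.name=You commit -q -m "add big.bin"
blob=$(git rev-parse HEAD:big.bin)                   # the blob's object hash
commits=$(git log --all --find-object="$blob" --format=%H)
branches=$(git branch --format='%(refname:short)' --contains "$commits")
echo "$branches"                                     # every branch holding that commit
```

If `branches` comes back empty, you are in the "no branch" case described above.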
Future versions of Git will give you the option of deferring the download of particular objects. This means you could run a `git fetch` that obtains, but only virtually obtains, a commit holding a multi-gigabyte file. Over time, this large file might wind up in all branches, even though you still haven't actually fetched it. If you are not worried about these future possibilities, we can finally get down to the more interesting question: how do you want to decide that a file is "too big", and how and when will you find such files?
The `git fetch` command itself won't do any of this for you. Its job is merely to call up some other Git repository; ask that repository about its branches, tags, and other names, and about the commits and other internal Git objects that go with those names; download none, some, or all of those objects into your repository; and update some set of names in your repository that will find those objects. The fetching process itself doesn't care about the sizes of those objects;¹ it just gets them. Nor does it care which branches, if any, contain any commits that contain any of those objects.
Before I link to the other question here, I'll add one more point: a large file can sometimes be stored in a small space in Git. For instance, a multi-gigabyte file consisting solely of one repeated byte will compress very well. This file won't take long to fetch and won't take much space to store in a commit, but it will occupy a lot of space when checked out. It's always worth thinking about the difference between compressed sizes and uncompressed sizes.
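A quick illustration of that gap, using gzip as a stand-in for the zlib compression Git applies internally (the file name and size here are arbitrary):

```shell
# Ten megabytes of a single repeated byte squeeze down to a few kilobytes.
cd "$(mktemp -d)"
head -c 10000000 /dev/zero > zeros.bin        # 10 MB, uncompressed
raw=$(wc -c < zeros.bin | tr -d ' ')
gzip -c zeros.bin > zeros.bin.gz
packed=$(wc -c < zeros.bin.gz | tr -d ' ')
echo "raw=$raw bytes, packed=$packed bytes"   # packed is a tiny fraction of raw
```

So a "big file" report based on transfer size and one based on checkout size can disagree wildly.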
So, perhaps just after you have run a `git fetch` (whether or not it took a long time or printed a large total size), you can, whenever you like, check for large objects. There is already a question about this: see How to find/identify large commits in git history? You might want to jump directly to raphinesse's answer.
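As a taste of what that kind of scan does, here is a condensed variant: list every blob reachable from any ref, with its size, largest last. The scratch repository and file names below are made up for demonstration; in practice you would run just the pipeline, in your own clone:

```shell
# Build a tiny repository, then rank its blobs by size.
cd "$(mktemp -d)"
git init -q -b main
printf 'small\n' > a.txt
head -c 500000 /dev/zero > big.bin            # the "too big" file, 500 kB
git add a.txt big.bin
git -c user.email=you@example.com -c user.name=You commit -q -m "two files"
largest=$(git rev-list --objects --all |
  git cat-file --batch-check='%(objecttype) %(objectname) %(objectsize) %(rest)' |
  awk '$1 == "blob" {print $3, $4}' |
  sort -n | tail -n 1)
echo "largest blob: $largest"                 # size in bytes, then path
```

Note that `%(objectsize)` is the uncompressed size; `%(objectsize:disk)` would give the on-disk (packed) size instead, which matters given the compression point above.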
¹Well, it doesn't care yet; note the possibility of skipping objects and the obvious idea of skipping large ones.