Is there any efficient way of handling this with Git?
No.
The hash ID of any Git object is a cryptographic checksum of its contents (prefixed by a small header giving the object's type and size). You could speed up the computation a bit by saving the intermediate hash state at checkpoints: with a saved state for the first 50 MB, changing some bytes 50 MB into a 100-MB object would let you resume from that checkpoint and rehash only the second half. But you'd still have to either store the entire new loose object or implement your own pack-file delta algorithm on top of that.
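For illustration only, here's a minimal Python sketch of that checkpointing idea, using `hashlib`'s ability to `copy()` an in-progress SHA-1 state. It mimics the way Git hashes a blob (SHA-1 over a `blob <size>\0` header plus the contents); note that the trick only helps when the object's size is unchanged, since the size is baked into the header, and Git itself does no such checkpointing:

```python
import hashlib

def blob_hasher(size: int):
    """Start a Git-style blob hash: SHA-1 over 'blob <size>\0' + contents."""
    h = hashlib.sha1()
    h.update(b"blob %d\0" % size)
    return h

data = bytearray(100 * 1024 * 1024)   # a 100 MB blob
size = len(data)

# Hash the first 50 MB once and checkpoint the intermediate state.
h = blob_hasher(size)
h.update(data[:50 * 1024 * 1024])
saved = h.copy()                       # saved state after the first 50 MB

# Change a byte past the checkpoint; only the tail needs rehashing.
data[60 * 1024 * 1024] = 0xFF
h2 = saved.copy()
h2.update(data[50 * 1024 * 1024:])

# Same answer as hashing the whole (modified) blob from scratch.
full = blob_hasher(size)
full.update(data)
assert h2.hexdigest() == full.hexdigest()
print(h2.hexdigest())
```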
Git is much better at handling a larger number of smaller files. For instance, instead of one 100-MB file, you could store 1000 100-kB files. If you need to modify some bytes in the middle, you're then changing only one file, or at most two, each of which is small and becomes a small loose object that can be hashed relatively quickly.
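A quick sketch of why the chunked layout wins: hashing each slice as its own Git-style blob shows that flipping one byte in the middle changes exactly one chunk's hash, so only that one small object needs to be rehashed and stored (the 100-kB chunk size here just matches the example above):

```python
import hashlib

def git_blob_hash(data) -> str:
    # Hash the way Git hashes a blob: SHA-1 over 'blob <size>\0' + contents.
    h = hashlib.sha1(b"blob %d\0" % len(data))
    h.update(data)
    return h.hexdigest()

CHUNK = 100 * 1024  # 100 kB per piece

def chunk_hashes(data):
    return [git_blob_hash(data[i:i + CHUNK])
            for i in range(0, len(data), CHUNK)]

data = bytearray(100 * 1024 * 1024)   # the 100 MB payload
before = chunk_hashes(data)
data[50 * 1024 * 1024] = 0xFF          # flip one byte in the middle
after = chunk_hashes(data)

changed = [i for i, (a, b) in enumerate(zip(before, after)) if a != b]
print(changed)   # [512] -- only 1 of the 1024 pieces changed
```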