For reference, here's a more concise version:
def sha1OfFile(filepath):
    import hashlib
    with open(filepath, 'rb') as f:
        return hashlib.sha1(f.read()).hexdigest()
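For example, to print the 40-character hex digest of a file (the filename here is just a placeholder for illustration):

print(sha1OfFile('some-file.bin'))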
On second thought: although I've never seen it happen, I think there's potential for f.read() to return less than the full file, and, for a many-gigabyte file, for f.read() to run out of memory. For everyone's edification, let's consider how to fix that. A first fix is:
def sha1OfFile(filepath):
    import hashlib
    sha = hashlib.sha1()
    with open(filepath, 'rb') as f:
        for line in f:
            sha.update(line)
    return sha.hexdigest()
However, there's no guarantee that b'\n' appears in the file at all, so the fact that the for loop hands us blocks of the file that end in b'\n' could give us the same problem we had originally: a file with no newlines comes back as one enormous block. Sadly, I don't see any similarly Pythonic way to iterate over the file in bounded-size blocks, which, I think, means we are stuck with a while True: ... break loop and with a magic number for the block size:
def sha1OfFile(filepath):
    import hashlib
    sha = hashlib.sha1()
    with open(filepath, 'rb') as f:
        while True:
            block = f.read(2**20)  # Magic number: one-megabyte blocks.
            if not block:
                break
            sha.update(block)
    return sha.hexdigest()
Of course, who's to say we can store one-megabyte strings? We probably can, but what if we're on a tiny embedded computer?
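If that's a real worry, one small tweak (just a sketch; the parameter name and its default are mine, not anything canonical) is to make the block size an argument, so callers on constrained hardware can dial it down:

def sha1OfFile(filepath, block_size=2**16):  # 64-KiB default, chosen arbitrarily
    import hashlib
    sha = hashlib.sha1()
    with open(filepath, 'rb') as f:
        while True:
            block = f.read(block_size)
            if not block:
                break
            sha.update(block)
    return sha.hexdigest()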
I wish I could think of a cleaner way: one that is guaranteed not to run out of memory on enormous files, has no magic numbers, and performs as well as the original simple Pythonic solution.
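The closest I know of to the original's feel, though it still carries a block size, is the two-argument form of iter(), which keeps calling f.read(2**20) until it returns the empty bytes object b'' (the sentinel):

def sha1OfFile(filepath):
    import hashlib
    sha = hashlib.sha1()
    with open(filepath, 'rb') as f:
        # iter(callable, sentinel) calls f.read() repeatedly and stops at EOF,
        # when read() returns b''.
        for block in iter(lambda: f.read(2**20), b''):
            sha.update(block)
    return sha.hexdigest()

And if you can rely on Python 3.11 or later, hashlib.file_digest(f, 'sha1') does the chunked reading internally, so the magic number disappears from your code entirely.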