We have an application that stores files and shows the user a "disk fill level".
To compute it we use the disk_total_space
and disk_free_space
combo, which works great on the dev machine (macOS, local HDD of around 250 GB).
On the production machine, which uses a 20 TB SAN storage bay, the results are way off:
df -h output:

Filesystem                Size  Used Avail Use% Mounted on
/dev/mapper/VG1-lv_foo01   20T  141M   19T   1% /foo01
df -hi output (inodes):

Filesystem               Inodes IUsed IFree IUse% Mounted on
/dev/mapper/VG1-lv_foo01   320M    30  320M    1% /foo01
disk_total_space("/foo01")
returns 21902586179584,
which is roughly 19.92 TiB; that looks about right.
But disk_free_space("/foo01")
returns 20802911117312,
which is roughly 18.92 TiB. That would mean about 1 TiB is in use, when in reality only about 140 MB is actually used!
Where could that difference come from? Inodes should not account for much space, since only a few are used.
I'm pretty confused; any ideas?
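One way to narrow this down yourself (a diagnostic sketch, not a confirmed answer): the statvfs structure that disk_free_space reads distinguishes f_bfree (all free blocks) from f_bavail (free blocks available to unprivileged processes). On ext-family filesystems the gap between the two is the root-reserved percentage (see tune2fs -m, traditionally 5%), which is in the same ballpark as the ~1 TiB observed here. In Python:

```python
import os

def free_breakdown(path):
    """Break free space into what everyone sees vs. what non-root can use."""
    st = os.statvfs(path)
    bs = st.f_frsize  # fragment size in bytes
    return {
        "free_total_bytes": st.f_bfree * bs,   # all free blocks
        "free_avail_bytes": st.f_bavail * bs,  # free blocks usable by non-root
        "gap_bytes": (st.f_bfree - st.f_bavail) * bs,  # e.g. reserved blocks
    }
```

Running free_breakdown("/foo01") on the production box and comparing gap_bytes against the ~1 TiB discrepancy would show whether reserved blocks explain it.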