
Using Python, how can we monitor a specific file for reads done by another process?

Specifically asking for Ubuntu, but cross-platform solutions will be ideal.

martineau
Athena Wisdom

1 Answer


The simplest approach is to compare the file's access time in a loop:

import os
import stat
import time

# Get the stat_result object
fileStatsObj = os.stat(filePath)
# Get the last access time
accessTime = time.ctime(fileStatsObj[stat.ST_ATIME])
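A minimal polling loop built on this idea might look like the following sketch. The function name, poll interval, and bounded check count are illustrative, not part of any library; it assumes the file at `path` exists and that the filesystem updates access times (some mounts use `noatime`):

```python
import os
import time

def watch_atime(path, interval=1.0, checks=3):
    """Poll the file's access time and collect the times at which it changes."""
    last_atime = os.stat(path).st_atime_ns  # nanosecond resolution
    events = []
    for _ in range(checks):
        time.sleep(interval)
        atime = os.stat(path).st_atime_ns
        if atime != last_atime:
            events.append(atime)  # the file was read since the last check
            last_atime = atime
    return events
```

Because this only samples the access time, multiple reads between two polls collapse into one event, which is the race-condition caveat noted in the edit below the code.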

EDIT: Note that this method may be unreliable for a precise access count because of potential race conditions (multiple reads between two checks are seen as one). The most reliable way is to have the reading process itself increment an access count in a separate file, while opening the monitored file in read-only mode.

Digoya
  • Great! Are there file systems that this method will not work on? – Athena Wisdom Jun 25 '21 at 16:26
  • @AthenaWisdom: It's portable, but you will need to do this continuously to "monitor" a file. – martineau Jun 25 '21 at 16:32
  • According to https://stackoverflow.com/a/34572372/13048707, with modern versions of Python `os.stat` produces pretty reliable values. macOS shares the same Unix architecture as Linux, so `os.stat` will work there as well. – Digoya Jun 25 '21 at 16:34
  • 2
    According to https://docs.python.org/3/library/os.html#os.stat_result you should always use `st_atime_ns` if you care about accurate results since `st_atime` will only produce a one day resolution for Windows FAT and FAT32 filesystems. – Axe319 Jun 25 '21 at 16:37
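The difference the comment above describes can be seen directly; this is a small sketch (statting the current directory purely as an example path):

```python
import os

st = os.stat(os.getcwd())
print(st.st_atime)     # float seconds since the epoch; resolution varies by filesystem
print(st.st_atime_ns)  # integer nanoseconds since the epoch; the more precise field
# The two views of the same timestamp agree to within float precision:
assert abs(st.st_atime - st.st_atime_ns / 1e9) < 1e-3
```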