Motivation: I have a file that contains a list of events. I would like to know how old the oldest event is. It seems this should be the file's ctime.
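A caveat on that assumption: on Linux, `st_ctime` is the last inode *change* time (metadata change), not a creation time, so it is a poor proxy for the age of the oldest event. A quick sketch (using a throwaway temporary file just for illustration):

```python
import datetime
import os
import tempfile

# st_ctime on Linux is the last inode *change* time (e.g. a chmod or
# rename updates it), not the file's creation time, so it cannot
# reliably tell you how old the oldest event in the file is.
with tempfile.NamedTemporaryFile(delete=False) as f:
    path = f.name
st = os.stat(path)
print("inode change time:", datetime.datetime.fromtimestamp(st.st_ctime))
os.remove(path)
```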
This is a perfect use case for an SQLite database (or even a PostgreSQL one, if your application may run on several Linux hosts sharing a common database server, or in several Linux processes), or at least a GDBM indexed file. BTW, what exactly is an event for your application, and how is each event represented in the file? If you use a relational database, invest effort in designing the database schema well: learn about database normalization and design suitable database indexes.
And I would register each event in that file or database with an explicit event addition time. See time(7) for more.
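As a minimal sketch of that idea (the table and column names here are my own invention), storing each event with an explicit addition timestamp makes the "oldest event" question a trivial query:

```python
import sqlite3
import time

# Hypothetical schema: one row per event, with an explicit addition time.
conn = sqlite3.connect(":memory:")  # use a file path for persistence
conn.execute("""
    CREATE TABLE events (
        id INTEGER PRIMARY KEY,
        payload TEXT NOT NULL,
        added_at REAL NOT NULL      -- seconds since the Unix epoch
    )
""")
# An index on added_at keeps min/max and range queries fast as data grows.
conn.execute("CREATE INDEX idx_events_added_at ON events(added_at)")

now = time.time()
conn.executemany(
    "INSERT INTO events (payload, added_at) VALUES (?, ?)",
    [("first", now - 3600), ("second", now - 60), ("third", now)],
)
conn.commit()

# Age of the oldest event, in seconds.
(oldest,) = conn.execute("SELECT MIN(added_at) FROM events").fetchone()
print(f"oldest event is {time.time() - oldest:.0f} seconds old")
conn.close()
```

The explicit `added_at` column is the point: it records when each event entered the store, independently of any filesystem metadata.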
If you expect a huge volume of data (many terabytes), look also into this answer.
Be aware that your processor is a lot faster than your disk (even an SSD one). In practice, a significant part of your file data may sit in the page cache (so getting more RAM could improve performance significantly).
See also https://www.linuxatemyram.com/ and http://norvig.com/21-days.html for useful insights.
If performance really matters to you, consider recoding your application in some compiled language implementation (C++ with GCC, Rust, OCaml, Common Lisp with SBCL, Go, ...). Most of them are significantly faster than Python.
Be aware that disk space is cheaper than CPU time, which is cheaper than your developer's time and effort.