There is a text file that is updated roughly every minute. The update itself takes a few seconds, depending on the internet connection speed, so the timing is not fixed; I hope that is not an issue.
Each update appends a couple of rows to the file. I want to read these newly added rows and pass them to another function for further analysis. Since that function takes quite a while to run, I would rather feed it only the newly added rows.
Additionally, I may also need the last N rows from the previous round, since they might be related to the newly added ones.
How can I read the (newly added + last N) rows of a text file in Python?
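To make the intent concrete, this is roughly the processing loop I have in mind (a sketch only; `analyse`, `N`, and the 60-second sleep are placeholders, and `read_new_rows` is exactly the piece I do not know how to implement efficiently):

```python
import time
from collections import deque

N = 5  # placeholder: how many rows from the previous round to carry over


def analyse(rows):
    """Placeholder for the slow analysis function."""
    print(f"analysing {len(rows)} rows")


def read_new_rows():
    """Stub: should return only the rows appended since the last call."""
    return []


previous_tail = deque(maxlen=N)  # last N rows of the previous round

while True:
    new_rows = read_new_rows()
    if new_rows:
        # analyse the new rows together with the tail of the previous round
        analyse(list(previous_tail) + new_rows)
        previous_tail.extend(new_rows)
    time.sleep(60)  # the file is updated roughly every minute
```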
Currently I read the data into a DataFrame and could use the index to do this, but that means reading the entire text file every time (or at least I think it does).
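For reference, a minimal sketch of what I do now, assuming the file can be parsed by `pandas.read_csv` (the path, the `header=None` choice, and the `last_index` bookkeeping here are just illustrative):

```python
import pandas as pd

LOG_FILE = "data.txt"  # hypothetical path to the growing text file
N = 5                  # rows to carry over from the previous round


def read_since(last_index):
    """Re-read the whole file and slice out the rows of interest."""
    df = pd.read_csv(LOG_FILE, header=None)          # reads the entire file every time
    rows = df.iloc[max(last_index - N, 0):]          # last N old rows + everything new
    return rows, len(df)                             # also return the new bookmark


rows, bookmark = read_since(last_index=0)  # first round: everything
```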
Note: The chances of two or more identical rows occurring are slim, but if I could read the file from the last row upwards, I could compare each row with the last row read in the previous round and stop when they match. However, since these identical rows are among the most important rows I am looking for, I would have to include a couple of extra rows further up to make sure all of them reach my function. Note that if identical rows do exist, they always appear consecutively.
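The comparison idea above would look roughly like this; it is only a sketch and it still loads the whole file, which is what I would like to avoid (`last_seen_row` is the last row processed in the previous round, and `extra` is the safety margin mentioned above):

```python
def rows_since(path, last_seen_row, extra=5):
    """Walk the file from the bottom up until the last row seen in the
    previous round is found, then include `extra` more rows above it so
    that a block of identical rows is not cut in half."""
    with open(path) as f:
        lines = [line.rstrip("\n") for line in f]  # still reads the whole file

    # find the last occurrence of last_seen_row, scanning from the end
    for i in range(len(lines) - 1, -1, -1):
        if lines[i] == last_seen_row:
            start = max(i - extra, 0)  # a few rows further up, to be safe
            return lines[start:]

    return lines  # last_seen_row not found: fall back to everything
```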
Is there a better solution?