3

I have two binary files that are related to one another (meaning, when one file's records are updated, the other file's matching records should be updated as well). Both files are binary files stored on disk.

The update will look something like this:

UpdateFirstFile() -- first file is updated.....

UpdateSecondFile() -- second file is updated...

What methods should I use to make sure that either BOTH files are updated or NONE of the files is updated?

Both files are flat files (of size 20[MB] each). I know a database would have solved this problem, yet I am not using one due to overhead reasons (every table would require much more than 20[MB] to be stored, and I am short on space and have 1000s of such files...).

Any ideas?

user3262424
  • What operating system are you on? I'm still skeptical that you can't use something like SQLite, but I'll play along :) – Mahmoud Abdelkader Mar 27 '11 at 23:17
  • I'm using Ubuntu Linux. SQLite is great, but has a huge overhead. Try storing 5,000,000 integers there. Instead of capturing 20 [MB], it will use something close to 60 [MB]. – user3262424 Mar 27 '11 at 23:19
  • If you were using Windows you could use transacted file I/O introduced in Vista. I don't know if there's anything equivalent on Linux. – David Heffernan Mar 27 '11 at 23:22
  • Duplicate. Please search. This has been covered. http://stackoverflow.com/search?q=reliable+file+update+%5Bpython%5D Are all relevant. – S.Lott Mar 27 '11 at 23:33
  • @s.lott I don't see anything of relevance in there. If you believe this to be a duplicate click on close and enter the question number of the duplicate. Please learn the system. – David Heffernan Mar 28 '11 at 00:05
  • @David Heffernan: I believe that someone asking the question should search first. I further believe that they are obligated to demonstrate that their question isn't a duplicate by including references to other, related questions. – S.Lott Mar 28 '11 at 00:17
  • @s.lott perhaps they did search first. You didn't find any related questions. – David Heffernan Mar 28 '11 at 00:20
  • @David Heffernan: I seem to have found hundreds. Perhaps you're not following the link I provided. – S.Lott Mar 28 '11 at 00:23
  • possible duplicate of [How to safely write to a file?](http://stackoverflow.com/questions/1812115/how-to-safely-write-to-a-file) – S.Lott Mar 28 '11 at 00:25
  • @s.lott not one of the questions you point at concerns atomic file operations. It's easy to find questions. Not so easy to find duplicates. If you find a duplicate then please vote to close. You have ample reputation. – David Heffernan Mar 28 '11 at 00:26
  • @David Heffernan: "You have ample reputation". Irrelevant. It's up to @user540009 to search. Not me. – S.Lott Mar 28 '11 at 00:27
  • @s.lott as you have just discovered the search on Stack Overflow is poor. Your search query that you originally posted is hopeless. You have now dug up a plausible duplicate but it differs in that it considers just a single file rather than two linked files. Perhaps there's an obvious way to extend the approach. – David Heffernan Mar 28 '11 at 00:31
  • S.Lott: would you mind sharing some wisdom on how to extend the approach mentioned in the link so that it works with 2 linked files? – user3262424 Mar 28 '11 at 00:36
  • @David Heffernan "Perhaps there's an obvious way to extend the approach" Precisely. – S.Lott Mar 28 '11 at 00:54

3 Answers

4

The generic approach would be to implement transactions with some kind of rollback journal.

For example, you could use a separate file to record the current contents of each part of each file that will be affected by the update. After your transaction is done, you remove the journal file.

The mere presence of the journal file when starting a transaction would mean that another transaction is either pending or has been interrupted. In that case you use the contents of the journal to reverse any file change that went through before the interruption.

This way you would ensure the atomicity of the update operation. I will leave any other parts of ACID that you need as an exercise to the reader.

Keep in mind that doing this The Right Way is harder than it sounds, especially if you have multiple processes updating the same files.
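
A minimal Python sketch of such a rollback journal, assuming fixed-offset record updates and an illustrative journal file name/format (not a fixed API), might look like this:

```python
import json
import os

JOURNAL = "update.journal"  # illustrative name

def read_chunk(path, offset, length):
    with open(path, "rb") as f:
        f.seek(offset)
        return f.read(length)

def write_chunk(path, offset, data):
    with open(path, "r+b") as f:
        f.seek(offset)
        f.write(data)
        f.flush()
        os.fsync(f.fileno())

def begin(updates):
    """Record the original bytes of every region we are about to overwrite.
    `updates` is a list of (path, offset, new_data) tuples."""
    entries = []
    for path, offset, new_data in updates:
        old = read_chunk(path, offset, len(new_data))
        entries.append({"path": path, "offset": offset, "old": old.hex()})
    tmp = JOURNAL + ".tmp"
    with open(tmp, "w") as f:
        json.dump(entries, f)
        f.flush()
        os.fsync(f.fileno())
    os.rename(tmp, JOURNAL)   # journal becomes visible atomically

def commit(updates):
    for path, offset, new_data in updates:
        write_chunk(path, offset, new_data)
    os.remove(JOURNAL)        # removing the journal marks the transaction done

def recover():
    """If a journal exists at startup, the last update was interrupted: undo it."""
    if not os.path.exists(JOURNAL):
        return
    with open(JOURNAL) as f:
        entries = json.load(f)
    for e in entries:
        write_chunk(e["path"], e["offset"], bytes.fromhex(e["old"]))
    os.remove(JOURNAL)
```

Usage would be along the lines of calling `recover()` at startup, then `begin(updates)` followed by `commit(updates)` for each two-file update, so an interrupted update is always rolled back to the previous consistent state.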

thkala
2

Do what the RDBMS engines do.

Write an "update sequence number" in each file.

You cannot ever guarantee that both files are written.

However, you can compare the update sequence numbers to see if the files have the same sequence number.

If the sequence numbers disagree, it's logically equivalent to no file having been written. Delete the files and use the backup copies.

If the sequence numbers agree, it's logically equivalent to both having been written.
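
A rough sketch of this idea in Python, assuming each file starts with an 8-byte little-endian sequence number and that `.bak` backup copies are kept (both are assumptions for illustration):

```python
import os
import shutil
import struct

def read_seq(path):
    with open(path, "rb") as f:
        return struct.unpack("<Q", f.read(8))[0]

def write_record(path, seq, offset, data):
    with open(path, "r+b") as f:
        f.seek(offset)
        f.write(data)
        f.seek(0)
        f.write(struct.pack("<Q", seq))   # bump the sequence number last
        f.flush()
        os.fsync(f.fileno())

def check_and_repair(first, second):
    """If the sequence numbers disagree, roll both files back to the backups."""
    if read_seq(first) != read_seq(second):
        shutil.copyfile(first + ".bak", first)
        shutil.copyfile(second + ".bak", second)

def update_both(first, second, off1, data1, off2, data2):
    # keep backups of the last consistent state before touching either file
    shutil.copyfile(first, first + ".bak")
    shutil.copyfile(second, second + ".bak")
    seq = read_seq(first) + 1
    write_record(first, seq, off1, data1)    # a crash here...
    write_record(second, seq, off2, data2)   # ...leaves the numbers out of step
```

`check_and_repair()` would run at startup; mismatched sequence numbers are treated as "nothing was written" and both files are restored from the backups.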

S.Lott
0

Both files are flat files (of size 20[MB] each). I know a database would have solved this problem, yet I am not using one due to overhead reasons (every table would require much more than 20[MB] to be stored, and I am short on space and have 1000s of such files...).

You might try the HDF5 format (designed to store and organize large amounts of numerical data) to store both datasets in a single file, or even to store all your data (all 1000s of files). It might be simpler than reimplementing database transactions.
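
For illustration, assuming the `h5py` package and made-up file/dataset names, both record sets could live side by side in one HDF5 file:

```python
import h5py  # assumes the h5py package is installed

# Keep the two related record sets as datasets in a single HDF5 file.
with h5py.File("records.h5", "a") as f:
    if "first" not in f:
        f.create_dataset("first", shape=(5_000_000,), dtype="int32")
        f.create_dataset("second", shape=(5_000_000,), dtype="int32")
    # update the matching records in both datasets
    f["first"][42] = 123
    f["second"][42] = 456
    f.flush()
```

Note that HDF5 by itself does not give you transactional guarantees across the two datasets; it mainly removes the storage overhead and the need to keep two separate files in sync on disk.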

jfs