2

Every so often I (re)compile some C (or C++) file I am working on -- which, by the way, succeeds without any warnings -- and then I execute my program only to realize that nothing has changed since my previous compilation. To keep things simple, let's assume that I added an instruction to my source to print some debugging information to the screen, so that I have visual evidence of trouble: indeed, I compile, execute, and unexpectedly nothing is printed to the screen.

This happened to me once when I had buggy code (I ran out of the bounds of a static array). Of course, if your code has some kind of hidden bug (What are all the common undefined behaviours that a C++ programmer should know about?), the compiled program can do pretty much anything.

This happened to me a second time when I was using a ridiculously slow network hard drive which -- I guess -- simply did not update my executable file after compilation, so I kept running the old version over and over despite the updated source. I am only speculating here, and feel free to correct me if such a phenomenon is impossible, but I suspect it had something to do with certain processes waiting for IO.

Well, such things can of course happen (and they indeed do) when you execute an old version in the wrong directory (that is, you execute something similarly named but actually completely unrelated to your source).

It is happening again, and it annoys me enough to ask: how do you make sure that your executable matches the source you are working on? Should I compare the date strings of the source and the executable in the main function? Should I delete the executable prior to compilation? I guess people might do something similar by means of version control.


Note: I was warned that this might be a subjective topic likely doomed to be closed.

Matsmath
  • I do compile and run in one command, i.e. `make hoge && ./hoge`, but this won't prevent me from compiling without saving what is being edited in the editor. – MikeCAT Apr 30 '16 at 15:17
  • Indeed, forgetting to save your source file is another source of such troubles... – Matsmath Apr 30 '16 at 15:18
  • Process, process, process. Automate builds and deployment as much as possible so that all you have to do is type `./build foo` and the build script automagically puts everything where it needs to go, every time. Any manual method is doomed to failure. – John Bode Apr 30 '16 at 15:28
  • I can run my compiler from a batch file, which deletes any object files and the executable before compiling. My programs also announce their name and version number either on the terminal or in the window title (as appropriate). – Weather Vane Apr 30 '16 at 16:50
  • @Weather would your file then *really* be deleted? What if the executable is currently in use? See http://unix.stackexchange.com/questions/49299/what-is-linux-doing-differently-that-allows-me-to-remove-replace-files-where-win. – Matsmath Apr 30 '16 at 16:53
  • I can't delete it: if I try I get the message "access denied". If I just leave it, the linker gives an error, because it cannot create the new executable. (Windows). – Weather Vane Apr 30 '16 at 16:56

2 Answers

1

Just use good ol' version control

  1. In the easy case you can just add any visible version-id in the code and check it (a hash, revision-id, or timestamp) -- see the sketch after this list.
  2. If your project has a lot of dependent files and you suspect that an older version than the "latest" ended up in the produced code, you can (besides, obviously, good makefile rules) also monitor the version of every file used for building the code (a VCS-dependent, but not so heavy, trick).
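
A minimal sketch of option 1 (my own illustration, not code from the answer): the standard `__DATE__` and `__TIME__` macros give the compilation time, and a hypothetical `BUILD_VERSION` macro can be filled in by the build system, e.g. with a git hash.

```c
/* Sketch only: print a visible version stamp on startup so a stale
 * binary is obvious immediately.
 *
 * BUILD_VERSION is a hypothetical macro, assumed to be supplied by the
 * build system, for example:
 *   cc -DBUILD_VERSION="\"$(git rev-parse --short HEAD)\"" -o hoge hoge.c
 */
#include <stdio.h>

#ifndef BUILD_VERSION
#define BUILD_VERSION "unknown"
#endif

int main(void)
{
    /* __DATE__ and __TIME__ expand to the compilation date and time. */
    printf("build %s (compiled %s %s)\n", BUILD_VERSION, __DATE__, __TIME__);

    /* ... rest of the program ... */
    return 0;
}
```

If the stamp printed at startup does not match what you just compiled or committed, you are running a stale binary.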
Lazy Badger
0

Check the timestamp of your executable. That should give you a hint as to whether or not it is recent/up-to-date. Alternatively, calculate a checksum for your executable and display it on startup; if the checksum has not changed since the last run, that is a clue that the executable was not updated.
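As a rough sketch of this idea (my own illustration, and it assumes a POSIX system where `argv[0]` actually names the executable file, which is not guaranteed everywhere), the program can print its own modification time and a trivial checksum at startup:

```c
/* Sketch only: print the executable's modification time and a simple
 * additive checksum at startup. Assumes argv[0] names the executable,
 * which is not guaranteed on every platform.
 */
#include <stdio.h>
#include <sys/stat.h>
#include <time.h>

int main(int argc, char *argv[])
{
    if (argc > 0) {
        struct stat st;
        if (stat(argv[0], &st) == 0)
            printf("executable mtime: %s", ctime(&st.st_mtime));

        FILE *f = fopen(argv[0], "rb");
        if (f) {
            unsigned long sum = 0;
            int c;
            while ((c = fgetc(f)) != EOF)
                sum += (unsigned char)c;
            fclose(f);
            printf("executable checksum: %lu\n", sum);
        }
    }

    /* ... rest of the program ... */
    return 0;
}
```

If the printed checksum never changes even though you keep recompiling, the binary on disk was not actually replaced.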

Jesper Juhl