
I have some sectors on my drive that read poorly. I could measure the time it takes to read each sector and then compare the times for the good sectors against the bad ones.

I could use a processor timer to make the measurements. How do I write a program in C/Assembly that measures the exact time it takes to read each sector?

So the procedure would be something like this:

Start the timer
Read the disk sector
Stop the timer
Read the time measured by the timer
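
For the "Read the disk sector" step, something like this minimal Linux sketch is what I have in mind (the device path /dev/sda, the 512-byte sector size, and the sector number are just placeholders, and reading a raw block device needs root):

#include <fcntl.h>
#include <stdio.h>
#include <unistd.h>

#define SECTOR_SIZE 512

int main(void)
{
    unsigned char buf[SECTOR_SIZE];
    long long sector = 12345;              /* placeholder sector number */

    int fd = open("/dev/sda", O_RDONLY);   /* placeholder device path */
    if (fd < 0) { perror("open"); return 1; }

    /* pread() reads one sector at the given byte offset */
    if (pread(fd, buf, SECTOR_SIZE, (off_t)sector * SECTOR_SIZE) != SECTOR_SIZE) {
        perror("pread");
        close(fd);
        return 1;
    }
    close(fd);
    printf("read sector %lld\n", sector);
    return 0;
}

(Opening with O_DIRECT and a suitably aligned buffer would bypass the page cache, which would otherwise skew the timings.)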
rigon
  • The operating system isn't important. It could be Windows or Linux. – rigon Feb 07 '11 at 23:37
  • See http://stackoverflow.com/questions/538609/high-resolution-timer-with-c-and-linux – Jim Mischel Feb 07 '11 at 23:40
  • Just a warning that you may never get consistent results from timing disk reads, only averages plus best and worst cases. Disk read times are determined by (among other things): whether the requested data is in cache (the disk may never even be read; the data is just taken from the memory cache), where the disk head is versus where the data is on the disk (inside track vs. outside track), and where the head is versus where the data is rotationally on the disk. – Joe Cullity Feb 13 '11 at 17:15

2 Answers


The most useful functionality is the "rdtsc" instruction (ReaD Time Stamp Counter), which reads a counter that is incremented on every tick of the processor's internal clock. On a 3 GHz processor the counter increments 3 billion times per second. The instruction returns a 64-bit unsigned integer containing the number of clock cycles since the processor was powered on.

Obviously the difference between two read-outs is the number of clock cycles consumed by the code sequence executed in between. For a 3 GHz machine you could use any of the following formulas to convert the difference to fractions of a second:

(time_difference + 150) / 300 gives a rounded-off elapsed time in 0.1 us (tenths of microseconds)
(time_difference + 1500) / 3000 gives a rounded-off elapsed time in us (microseconds)
(time_difference + 1500000) / 3000000 gives a rounded-off elapsed time in ms (milliseconds)

The 0.1 us conversion is the most precise you can use without having to adjust for read-out overhead.
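
As a minimal sketch (assuming a GCC/Clang x86 toolchain, which exposes the __rdtsc() intrinsic in x86intrin.h; on MSVC the same intrinsic is in intrin.h; read_disk_sector() is just a placeholder for the actual read):

#include <stdio.h>
#include <x86intrin.h>   /* __rdtsc() intrinsic on GCC/Clang */

/* placeholder: substitute the actual sector read here */
static void read_disk_sector(void) { }

int main(void)
{
    unsigned long long start, end, cycles;

    start = __rdtsc();   /* counter value before the read */
    read_disk_sector();
    end = __rdtsc();     /* counter value after the read */

    cycles = end - start;

    /* convert with the 3 GHz formula above: 300 cycles = 0.1 us */
    printf("%llu cycles (~%llu tenths of a microsecond)\n",
           cycles, (cycles + 150) / 300);
    return 0;
}

Note that out-of-order execution can move rdtsc relative to the surrounding instructions, so for very short sequences a serializing variant such as rdtscp gives more stable readings.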

Olof Forshell
  • Search for rdtsc among your compiler's intrinsic, non-standard functions. In C for x86-32 it could be coded as a separate unsigned __int64 name(void) function with _asm{rdtsc} as its only content. rdtsc will place the result in edx:eax, which are (always?) the registers used for returning __int64s from functions. In function form you need to measure the overhead, though, which will be significant. – Olof Forshell Feb 28 '11 at 08:28

In C, the function that would be most useful is clock() in time.h.

To time something, put calls to clock() around it, like so:

#include <stdio.h>
#include <time.h>   /* clock(), clock_t, CLOCKS_PER_SEC */

clock_t start, end;
float elapsed_time;

start = clock();      /* timestamp before the read */
read_disk_sector();   /* the operation being timed */
end = clock();        /* timestamp after the read */

/* divide by CLOCKS_PER_SEC to convert clock ticks to seconds */
elapsed_time = (float)(end - start) / (float)CLOCKS_PER_SEC;
printf("Elapsed time: %f seconds\n", elapsed_time);

This code prints out the number of seconds the read_disk_sector() function call took.

You can read more about the clock function here: http://www.cplusplus.com/reference/clibrary/ctime/clock/

Eric Finn
  • Note: `clock_t` might be an integer type. If that's the case, `elapsed_time` will have a 1 second resolution and will likely always be `0` for good sectors. It might even be 0 for bad sectors if the call completes in less than a second, which certainly seems possible if the sector isn't too screwed up. – Michael Burr Feb 07 '11 at 23:31
  • @Michael Burr: I'm quite sure that `clock_t` will always be an integer, half a clock unit doesn't make much sense. In any case, one doesn't have to throw away the remainder. – Hasturkun Feb 07 '11 at 23:42
  • That is correct. I edited my answer to change elapsed_time to a float and to calculate it as a float. – Eric Finn Feb 07 '11 at 23:45
  • As a side note, according to my manpage, `clock()` returns a value based on the CPU time used, not wall-clock – Hasturkun Feb 07 '11 at 23:47
  • @Hasturkun: this is platform-dependent, on Windows it's wall-clock time and on Linux it's CPU time. – Eric Finn Feb 07 '11 at 23:52
  • @Eric Finn: This is true, though the standard does say that `clock()` is supposed to return *"the implementation's best approximation to the processor time used by the program"* (apparently, on Windows, the implementation's "best approximation" is the wall-clock time since it was started ;) – caf Feb 08 '11 at 01:30