5

I am trying to write a program that generates a random number based on the system's internal clock, so that I don't need a seed or a first value. The first value should be taken from the internal clock by converting year, month, day, hours, minutes and seconds to milliseconds and adding the current millisecond, giving a unique number (a timestamp). Any help getting these values in C?
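
Just to illustrate the kind of timestamp I mean, here is a rough sketch in standard C; it only gets as far as whole seconds, and the missing millisecond part is what I'm asking about:

#include <stdio.h>
#include <time.h>

int main(void)
{
    time_t now = time(NULL);             /* current calendar time */
    struct tm *tm = localtime(&now);     /* year, month, day, hours, minutes, seconds */
    /* mktime() folds the broken-down fields back into a single second count;
       multiplying by 1000 leaves room for the millisecond part I still need */
    long long stamp = (long long)mktime(tm) * 1000;
    printf("%lld\n", stamp);
    return 0;
}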

Jonathan Leffler
El GaGa
  • Note that using the current time as a seed for a random number generator is fine as long as you aren't trying to use it for security too. If you're trying to ensure that you get different values most times you run the program, it'll do. If you're trying to be unpredictable, as in cryptography, it is hopelessly insecure to use the time as a seed for your PRNG (it gives you at most 16 bits of entropy — I'm being generous; it is more like 10 bits, and arguably less than that — whereas you need 128 or more bits of entropy for most cryptographic work). – Jonathan Leffler Mar 29 '14 at 12:59
  • Possible duplicate of [How to measure time in milliseconds using ANSI C?](http://stackoverflow.com/questions/361363/how-to-measure-time-in-milliseconds-using-ansi-c) and other system specific versions like http://stackoverflow.com/questions/3729169/how-can-i-get-the-windows-system-time-with-millisecond-resolution – Ciro Santilli OurBigBook.com Mar 18 '16 at 22:45

3 Answers

6

You can use either clock_gettime() or gettimeofday() — or, if you're in a really impoverished environment, ftime() or time(). Make sure you're using a big enough data type to hold the millitime.

For clock_gettime(), the result is a struct timespec with elements tv_sec and tv_nsec. You'd use:

#include <time.h>
#include <stdint.h>

struct timespec t;
clock_gettime(CLOCK_REALTIME, &t);
int64_t millitime = t.tv_sec * INT64_C(1000) + t.tv_nsec / 1000000;

With gettimeofday() (which is officially deprecated, but is more widely available — for example, Mac OS X has gettimeofday() but does not have clock_gettime()), you have a struct timeval with members tv_sec and tv_usec:

#include <sys/time.h>
#include <stdint.h>

struct timeval t;
gettimeofday(&t, 0);
int64_t millitime = t.tv_sec * INT64_C(1000) + t.tv_usec / 1000;

(Note that ftime() was one of the early sub-second time facilities in Unix, though it arrived after 7th Edition Unix, and it was added to POSIX (the Single Unix Specification) for backwards compatibility. It has since been removed from POSIX, although some systems still provide it. You should aim to use clock_gettime() if you can, and gettimeofday() if you can't.)
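
Tying that back to the question: once you have the millisecond value, a minimal sketch of seeding the standard PRNG with it (assuming a POSIX system where clock_gettime() is available) might look like this:

#include <stdio.h>
#include <stdint.h>
#include <stdlib.h>
#include <time.h>

int main(void)
{
    struct timespec t;
    clock_gettime(CLOCK_REALTIME, &t);   /* wall-clock time with sub-second resolution */
    int64_t millitime = t.tv_sec * INT64_C(1000) + t.tv_nsec / 1000000;
    srand((unsigned)millitime);          /* truncates to the width srand() accepts */
    printf("seed %lld -> first value %d\n", (long long)millitime, rand());
    return 0;
}

(As the comment under the question notes, this gives you run-to-run variety, not cryptographic-quality randomness.)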

Jonathan Leffler
  • FWIW empirically I have seen the function `clock_gettime()` (on an Intel Xeon Linux system) always return a nanoseconds value rounded to 250, so you may need to divide by 1000000 if using that part of the result. [EDIT - divide by 1000 yields microseconds, as pointed out below] – 6EQUJ5 Mar 29 '14 at 12:44
  • @AndrewMcDonnell: you need to divide the nanoseconds value by one million to get the milliseconds. – Jonathan Leffler Mar 29 '14 at 12:46
  • Of course you are correct; I was just noting that if you needed a bigger range then the last 3 digits are not useful – 6EQUJ5 Mar 29 '14 at 12:47
  • @chux: yes, you're right; something should coerce the result into `int64_t` before the assignment. Using the `LL` suffix is one way to do it, and would be the way I'd use if the type of `millitime` were `long long`. But since I used `int64_t`, it is probably better to use the `INT64_C()` macro defined in `<stdint.h>` (along with `int64_t`). – Jonathan Leffler Mar 29 '14 at 15:14
  • how can I insert #include ? I am being told this library is unidentified – El GaGa Mar 30 '14 at 14:51
  • @ElGaGa: (a) it isn't a library; it is a header — (b) if you don't have it, you can't use it — (c) which platform are you on that you don't have it? Windows is a law unto itself and doesn't go with the POSIX standard all that well. – Jonathan Leffler Mar 30 '14 at 15:11
0

Standard C does not guarantee time accuracy better than a second. If you're on a POSIX system, try the clock* functions.
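
For illustration, a sketch of the most portable fallback (strictly standard C, so the seed only has one-second resolution) might be:

#include <stdio.h>
#include <stdlib.h>
#include <time.h>

int main(void)
{
    srand((unsigned)time(NULL));   /* one-second resolution is all standard C promises */
    printf("%d\n", rand());
    return 0;
}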

pmg
  • @ElGaGa: I have no idea. I suppose, just like C, C++ doesn't guarantee time accuracy better than a second. I don't know what functions beyond the C++ Standard are widely available. – pmg Mar 31 '14 at 17:30
0

If you happen to be doing this in Windows you can use:

#include <windows.h>   /* GetTickCount() is declared here and exported by kernel32.dll */

unsigned int rand = GetTickCount() % A_PRIME_NUMBER_FOR_EXAMPLE;

[Edit: emphasise modulo something appropriate for your circumstances]
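
For example, a sketch of feeding that tick count straight into srand() (assuming a Windows build where <windows.h> is available) might be:

#include <windows.h>
#include <stdio.h>
#include <stdlib.h>

int main(void)
{
    DWORD ticks = GetTickCount();   /* milliseconds since the system started */
    srand((unsigned)ticks);
    printf("%d\n", rand());
    return 0;
}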

6EQUJ5
  • That probably requires a very precise system clock. See [here](http://msdn.microsoft.com/en-us/library/windows/desktop/ms724408%28v=vs.85%29.aspx) for documentation. It states that typical resolution is 10--16ms. Also, you probably meant 65535 there. – FRob Mar 29 '14 at 12:49
  • I just meant any arbitrary number. Perhaps I was being a bit too liberal in my interpretation of the question... some numbers are more random than others, etc... – 6EQUJ5 Mar 29 '14 at 12:51
  • Using a prime would have benefits over the magic number 65531. – chux - Reinstate Monica Mar 29 '14 at 15:15
  • GetTickCount() is included in which library ?? – El GaGa Mar 30 '14 at 14:50
  • See link in post by @FRob – 6EQUJ5 Mar 30 '14 at 23:01