Is there any way to get milliseconds and their fractional part since 1970 using time.h in C?
- You can't get the fraction part in a platform independent way. Which platform are you focusing on? – dalle Dec 23 '09 at 11:56
- I am following the standard of ANSI C, so that my application will be platform independent. Currently I am on the Windows platform. – Siddiqui Dec 23 '09 at 12:05
6 Answers
This works on Ubuntu Linux:
#include <stdio.h>
#include <sys/time.h>
...
struct timeval tv;
gettimeofday(&tv, NULL);

unsigned long long millisecondsSinceEpoch =
    (unsigned long long)(tv.tv_sec) * 1000 +
    (unsigned long long)(tv.tv_usec) / 1000;

printf("%llu\n", millisecondsSinceEpoch);
At the time of this writing, the printf() above is giving me 1338850197035. You can do a sanity check at the TimestampConvert.com website where you can enter the value to get back the equivalent human-readable time (albeit without millisecond precision).
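If you would rather sanity-check locally, here is a small untested sketch using only standard C; the constant is just the sample value quoted above:

#include <stdio.h>
#include <time.h>

int main(void)
{
    /* sample millisecond value from the answer above */
    unsigned long long millisecondsSinceEpoch = 1338850197035ULL;

    /* convert the seconds part back to a human-readable UTC date */
    time_t seconds = (time_t)(millisecondsSinceEpoch / 1000);
    char buf[64];
    strftime(buf, sizeof buf, "%Y-%m-%d %H:%M:%S", gmtime(&seconds));

    printf("%s.%03llu UTC\n", buf, millisecondsSinceEpoch % 1000);
    return 0;
}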

If you want millisecond resolution, you can use gettimeofday() on POSIX systems. For a Windows implementation, see gettimeofday function for windows; a rough sketch of such a shim follows the snippet below.
#include <sys/time.h>
...
struct timeval tp;
gettimeofday(&tp, NULL);   /* the second (timezone) argument is obsolete; pass NULL */
long long ms = tp.tv_sec * 1000LL + tp.tv_usec / 1000;
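Since the linked page is external, here is an untested sketch of what such a Windows shim typically looks like, built on GetSystemTimeAsFileTime(). It assumes struct timeval comes from <winsock2.h> and ignores the timezone argument:

#include <winsock2.h>   /* struct timeval on Windows */
#include <windows.h>

int gettimeofday(struct timeval *tp, void *tzp)
{
    FILETIME ft;
    ULONGLONG t;

    GetSystemTimeAsFileTime(&ft);                  /* 100-ns ticks since 1601-01-01 */
    t  = ((ULONGLONG)ft.dwHighDateTime << 32) | ft.dwLowDateTime;
    t -= 116444736000000000ULL;                    /* shift epoch from 1601 to 1970 */

    tp->tv_sec  = (long)(t / 10000000ULL);         /* whole seconds */
    tp->tv_usec = (long)((t % 10000000ULL) / 10);  /* remainder as microseconds */
    (void)tzp;                                     /* timezone argument is ignored */
    return 0;
}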

It's not standard C, but gettimeofday() is present in both SysV- and BSD-derived systems, and is in POSIX. It returns the time since the epoch in a struct timeval:
struct timeval {
    time_t      tv_sec;   /* seconds */
    suseconds_t tv_usec;  /* microseconds */
};
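For completeness, a minimal sketch of turning those two fields into a millisecond count (essentially the same arithmetic as the snippets above; assumes a POSIX system):

#include <stdio.h>
#include <sys/time.h>

int main(void)
{
    struct timeval tv;
    gettimeofday(&tv, NULL);

    /* combine seconds and microseconds into milliseconds since the epoch */
    long long ms = (long long)tv.tv_sec * 1000 + tv.tv_usec / 1000;
    printf("%lld ms since 1970-01-01\n", ms);
    return 0;
}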

For Unix and Linux you could use gettimeofday.
For Win32 you could use GetSystemTimeAsFileTime and then convert it to time_t + milliseconds:
void FileTimeToUnixTime(FILETIME ft, time_t *t, int *ms)
{
    /* FILETIME counts 100-ns intervals since 1601-01-01 */
    LONGLONG ll = ft.dwLowDateTime | ((LONGLONG)ft.dwHighDateTime << 32);
    ll -= 116444736000000000LL;            /* 100-ns intervals between 1601 and 1970 */
    *ms = (int)((ll % 10000000) / 10000);  /* leftover 100-ns ticks -> milliseconds */
    ll /= 10000000;                        /* 100-ns ticks -> seconds */
    *t = (time_t)ll;
}
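A short, untested sketch of calling it, using GetSystemTimeAsFileTime() for the current time (assumes the FileTimeToUnixTime() function above is in scope):

#include <stdio.h>
#include <time.h>
#include <windows.h>

int main(void)
{
    FILETIME ft;
    time_t seconds;
    int ms;

    GetSystemTimeAsFileTime(&ft);      /* current UTC time as a FILETIME */
    FileTimeToUnixTime(ft, &seconds, &ms);

    printf("%lld.%03d seconds since the epoch\n", (long long)seconds, ms);
    return 0;
}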

- gettimeofday() does what is needed. Here's a code example: http://www.docs.hp.com/en/B9106-90009/gettimeofday.2.html So what OSes is Aman trying to support? – tpgould Dec 23 '09 at 12:12
- @aioobe Hence why it's [site](https://meta.stackexchange.com/a/8259/217640) [policy](https://meta.stackexchange.com/a/7659/217640) to quote the relevant passage of (or summarize) any link you include. – Braden Best May 07 '19 at 15:30
// returns the current UTC time in milliseconds since the Unix epoch (Win32)
ULONGLONG GetPosixTimeMs(void)
{
    // the system time
    SYSTEMTIME systemTime;
    GetSystemTime(&systemTime);

    // the current file time
    FILETIME fileTime;
    SystemTimeToFileTime(&systemTime, &fileTime);

    // file time in 100-nanosecond resolution
    ULONGLONG fileTimeNano100 =
        (((ULONGLONG)fileTime.dwHighDateTime) << 32) + fileTime.dwLowDateTime;

    // to milliseconds, with the Unix/Windows epoch offset removed
    ULONGLONG posixTime = fileTimeNano100 / 10000 - 11644473600000ULL;
    return posixTime;
}

- [How do I write a good answer to a question? - Meta Stack Exchange](https://meta.stackexchange.com/a/7662/217640). Your answer seems to be heavily reliant on a specific OS/environment (what about non-POSIX environments?), and lacks context. For example, POSIX cstdlib does not use all-capital type names for *any* of the datatypes it specifies--those had to be defined elsewhere. You appear to be using a library. It's fine to mention that "this answer pertains to X class of systems with Y library, I am not sure how to do it with Z system, though", but code-only answers like this are hardly useful – Braden Best May 07 '19 at 15:48
Unix time or Posix time is the time in seconds since the epoch you mentioned.
bzabhi's answer is correct: you simply multiply the Unix timestamp by 1000 to get milliseconds.
Be aware that all millisecond values returned by relying on the Unix timestamp will be multiples of 1000 (like 12345678000). The resolution is still only 1 second.
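For illustration, a minimal standard-C sketch of that conversion (note the trailing zeros):

#include <stdio.h>
#include <time.h>

int main(void)
{
    /* time() has 1-second resolution, so this is always a multiple of 1000 */
    long long ms = (long long)time(NULL) * 1000;
    printf("%lld\n", ms);
    return 0;
}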
As noted in the comments on the question, you can't get the fraction part this way. The comment from Pavel is also correct: the Unix timestamp does not take leap seconds into account, which makes it even less wise to rely on a conversion to milliseconds.
- Any other library which can get the exact millisecond including its fraction part? – Siddiqui Dec 23 '09 at 12:00
- The Unix timestamp is about as fundamental as we can go. It must be that the designers of Unix thought one second resolution was enough. Then again, the overhead of maintaining a 1 ms resolution was probably beyond early Unix systems. – pavium Dec 23 '09 at 12:04