
On Ubuntu, I am developing an application in C++ with Qt. I'm using QTimer but it is not precise enough. I am looking for something with resolution better than a millisecond. Is there a way to do this?

My code:

    temps = 0;
    dataTimer = new QTimer();
    connect(dataTimer, SIGNAL(timeout()), this, SLOT(simuler()));
    dataTimer->start(1);   // the interval is in milliseconds, so 1 ms is as fine as QTimer goes
BkarimCe
  • How about `usleep` in a worker thread? Or have a look at [POSIX timers](http://man7.org/linux/man-pages/man2/timer_create.2.html). – Karsten Koop May 19 '16 at 12:10
  • `QElapsedTimer` allows very high precision time, but not on Linux, unfortunately. There may be a third-party timer that will do what you need. – jonspaceharper May 19 '16 at 12:14
  • @JonHarper Give me an example, please – BkarimCe May 19 '16 at 12:16
  • I don't code on Linux often, so I don't have a solution. Look into Karsten's ideas. – jonspaceharper May 19 '16 at 12:17
  • [`std::chrono`](http://en.cppreference.com/w/cpp/chrono) – Ivan Aksamentov - Drop May 19 '16 at 12:21
  • Consider that a lot of common hardware platforms do not have a real-time clock that runs at the accuracy you require. It is possible on some platforms to re-program the hardware clock. – Richard Critten May 19 '16 at 12:56
  • How do you think sub-millisecond would be meaningful on any system supporting Qt? I doubt it is ported to any real-time system, so you simply can't have that kind of accuracy. – Lundin May 19 '16 at 13:05
  • **Why** exactly **do you ask?** Please *edit your question* to motivate it. What kind of application are you coding, and where do tiny delays come from and why do they matter? – Basile Starynkevitch May 19 '16 at 13:06
  • Such timers are usually only useful when you're interacting with the outside world. It's pointless to ask for them if you don't tell us what the I/O mechanism is - it's likely that the selected mechanism introduces latencies that make sub-millisecond timing pointless. For example, full-speed USB devices effectively operate on a 1 ms clock and any new transactions will start after the next 1 ms tick. – Kuba hasn't forgotten Monica May 19 '16 at 16:50

1 Answer


I believe you won't reliably get sub-millisecond delays for useful actions in practice (a Linux desktop is not a real-time system). Perhaps compiling your own kernel with CONFIG_PREEMPT might help a little. But this is a hard limitation (and there are no software tricks to overcome it).

As Karsten Koop commented, you might use threads or POSIX timers. See also time(7).
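
For instance, here is a minimal sketch of a periodic POSIX timer; the 500 µs period and the `on_tick` callback name are illustrative, and older glibc needs `-lrt` at link time:

    #include <csignal>   // sigevent, SIGEV_THREAD
    #include <ctime>     // timer_create, timer_settime, itimerspec
    #include <cstdio>    // perror
    #include <unistd.h>  // pause

    // Invoked on each expiry; SIGEV_THREAD runs it in its own thread.
    static void on_tick(union sigval) {
        // keep the work short: overruns accumulate at sub-millisecond periods
    }

    int main() {
        timer_t tid;
        sigevent sev{};
        sev.sigev_notify = SIGEV_THREAD;
        sev.sigev_notify_function = on_tick;
        if (timer_create(CLOCK_MONOTONIC, &sev, &tid) == -1) {
            perror("timer_create");
            return 1;
        }

        itimerspec spec{};
        spec.it_value.tv_nsec    = 500000;   // first expiry after 500 µs
        spec.it_interval.tv_nsec = 500000;   // then every 500 µs
        if (timer_settime(tid, 0, &spec, nullptr) == -1) {
            perror("timer_settime");
            return 1;
        }

        pause();  // keep the process alive; a real program would do its work here
    }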

As Drop commented, if you compile in C++11 mode, consider using <chrono> (it is more or less wrapping clock_gettime in a C++11-conforming way, so you'll be able to measure small delays).
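
A minimal sketch of measuring a small delay with <chrono> (the steady clock is the appropriate choice for intervals, since it never jumps backwards):

    #include <chrono>
    #include <iostream>

    int main() {
        auto t0 = std::chrono::steady_clock::now();
        // ... the code being measured ...
        auto t1 = std::chrono::steady_clock::now();
        auto us = std::chrono::duration_cast<std::chrono::microseconds>(t1 - t0);
        std::cout << "elapsed: " << us.count() << " µs\n";
    }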

A Linux-specific solution might be to use timerfd_create(2) and poll that file descriptor in the Qt event loop (using QSocketNotifier).
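
A sketch of that approach; the `Simulation` class, its `int fd` member, the `startFineTimer`/`onTimerFd` names, and the 250 µs period are assumptions, and error handling is omitted:

    #include <sys/timerfd.h>
    #include <unistd.h>
    #include <cstdint>
    #include <QSocketNotifier>

    void Simulation::startFineTimer()
    {
        fd = timerfd_create(CLOCK_MONOTONIC, TFD_NONBLOCK | TFD_CLOEXEC);

        itimerspec spec{};
        spec.it_value.tv_nsec    = 250000;   // first expiry after 250 µs
        spec.it_interval.tv_nsec = 250000;   // then every 250 µs
        timerfd_settime(fd, 0, &spec, nullptr);

        // QSocketNotifier watches any file descriptor, not just sockets
        QSocketNotifier *notifier = new QSocketNotifier(fd, QSocketNotifier::Read, this);
        connect(notifier, SIGNAL(activated(int)), this, SLOT(onTimerFd()));
    }

    void Simulation::onTimerFd()
    {
        uint64_t expirations = 0;
        // the fd must be drained, otherwise activated() keeps firing;
        // if the event loop was busy, expirations counts the missed ticks
        read(fd, &expirations, sizeof expirations);
        simuler();   // the slot from the question
    }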

If you just want to measure time (not act on it) with sub-millisecond precision, just use clock_gettime(2).
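
For instance, a minimal sketch (CLOCK_MONOTONIC is unaffected by system-clock adjustments):

    #include <ctime>
    #include <cstdio>

    int main() {
        timespec t0, t1;
        clock_gettime(CLOCK_MONOTONIC, &t0);
        // ... the code being measured ...
        clock_gettime(CLOCK_MONOTONIC, &t1);
        long long ns = (t1.tv_sec - t0.tv_sec) * 1000000000LL
                     + (t1.tv_nsec - t0.tv_nsec);
        printf("elapsed: %lld ns\n", ns);
    }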

Notice that several Qt operations are complex enough to need more than a millisecond to run. And humans are not able to perceive (e.g. see on the screen) sub-millisecond delays: a screen is refreshed at 60 Hz (or 144 Hz for some expensive screens), so you cannot see such small delays on it.

If you are not familiar with Linux programming, take the time to read Advanced Linux Programming in addition to the hyperlinked material above (the ALP book predates timerfd_create).

Basile Starynkevitch