110

I can give it floating point numbers, such as

time.sleep(0.5)

but how accurate is it? If I give it

time.sleep(0.05)

will it really sleep about 50 ms?

codeforester
Claudiu

13 Answers

91

The accuracy of the time.sleep function depends on your underlying OS's sleep accuracy. For non-realtime OSes like stock Windows, the smallest interval you can sleep for is about 10-13 ms. I have seen accurate sleeps within several milliseconds of the requested time when above that 10-13 ms minimum.

Update: As mentioned in the docs cited below, it's common to do the sleep in a loop that will go back to sleep if it wakes you up early.
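That loop can be sketched like this (`sleep_at_least` is my own name for it, not a stdlib function):

```python
import time

def sleep_at_least(duration):
    """Sleep for at least `duration` seconds, resuming the
    sleep if the OS wakes us up early."""
    deadline = time.monotonic() + duration
    remaining = duration
    while remaining > 0:
        time.sleep(remaining)
        remaining = deadline - time.monotonic()
```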

I should also mention that if you are running Ubuntu you can try out a pseudo real-time kernel (with the RT_PREEMPT patch set) by installing the rt kernel package (at least in Ubuntu 10.04 LTS).

EDIT: Correction: non-realtime Linux kernels have a minimum sleep interval much closer to 1 ms than 10 ms, but it varies in a non-deterministic manner.

Sukrit Kalra
Joseph Lisee
    Actually, Linux kernels have defaulted to a higher tick rate for quite a while, so the "minimum" sleep is much closer to 1ms than 10ms. It's not guaranteed--other system activity can make the kernel unable to schedule your process as soon as you'd like, even without CPU contention. That's what the realtime kernels are trying to fix, I think. But, unless you really need realtime behavior, simply using a high tick rate (kernel HZ setting) will get you not-guaranteed-but-high-resolution sleeps in Linux without using anything special. – Glenn Maynard Jul 15 '09 at 21:44
  • Yes, you are right. I tried with Linux 2.6.24-24 and was able to get pretty close to 1000 Hz update rates. At the time I was also running the code on Mac and Windows, so I probably got confused. I know Windows XP at least has a tick rate of about 10 ms. – Joseph Lisee Aug 11 '10 at 15:55
  • On Windows 8 I get just under 2ms – markmnl May 08 '14 at 02:23
  • Also, the accuracy is not just dependent on the OS but on what the OS is doing; on both Windows and Linux, if they are busy with something more important, `sleep()` can overrun. From the docs: "the suspension time may be longer than requested by an arbitrary amount because of the scheduling of other activity in the system". – markmnl May 08 '14 at 02:39
74

People are quite right about the differences between operating systems and kernels, but I do not see any granularity in Ubuntu and I see a 1 ms granularity in Windows 7, suggesting a different implementation of time.sleep, not just a different tick rate. Closer inspection suggests a 1 μs granularity in Ubuntu, by the way, but that is due to the time.time function that I use for measuring the accuracy.

[Plot: typical time.sleep behaviour in Python on Linux and Windows]
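A rough sketch of how such a measurement can be made (my own loop, not necessarily the exact code behind the plot):

```python
import time

def measure_sleep(requested, samples=100):
    """Measure actual sleep durations for one requested duration;
    returns (min, mean, max) in seconds."""
    actual = []
    for _ in range(samples):
        start = time.perf_counter()
        time.sleep(requested)
        actual.append(time.perf_counter() - start)
    return min(actual), sum(actual) / samples, max(actual)

# e.g. measure_sleep(0.001) reveals the platform's effective granularity
```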

Wilbert
    It's interesting how Linux has chosen to always sleep for slightly longer than requested, whereas Microsoft have chosen the opposite approach. – jleahy Jul 23 '13 at 18:56
  • @jleahy - the Linux approach makes sense to me: sleep is really a release of execution priority for an amount of time, after which you once again submit yourself to the will of the scheduler (which may or may not schedule you for execution right away). – underrun Sep 18 '13 at 15:55
  • On Windows 8 I do not see Windows sleeping for less than the time requested. I am also able to sleep for less than 1 ms, but anywhere below 1 ms I get 0.00009 ms! (perhaps it is not sleeping at all) – markmnl May 08 '14 at 02:32
  • I gave another answer to this question using OS X Yosemite. Completely different behavior from either Windows or Linux. – Tim Supinie Jun 05 '15 at 17:26
  • How did you get the results? Could you provide the source code? The graph looks like an artifact of using different timers for measuring the time and the sleep (in principle, you could even [use the drift between the timers as a source of randomness](http://stackoverflow.com/a/28721505/4279)). – jfs Jun 05 '15 at 19:36
  • @J.F. Sebastian - The function that I used is in https://www.socsci.ru.nl/wilberth/computer/sleepAccuracy.html . The third graph there shows an effect similar to what you see, but of only 1‰. – Wilbert Jun 11 '15 at 09:55
  • @Wilbert: `time.time()` on Windows might be worse than on Linux. Have your tried to run it using `timeit.default_timer` instead? – jfs Jun 11 '15 at 10:40
  • @J.F. Sebastian I use time.clock() on Windows – Wilbert Jun 11 '15 at 14:16
  • CPython `time.sleep()` uses milliseconds on Windows, so that explains the lack of sub-millisecond sleeps. – Yann Vernier Jun 22 '17 at 07:57
33

Here's my follow-up to Wilbert's answer: the same measurement for Mac OS X Yosemite, since it's not been mentioned much yet.

[Plot: sleep behavior of Mac OS X Yosemite]

Looks like a lot of the time it sleeps about 1.25 times the time that you request and sometimes sleeps between 1 and 1.25 times the time you request. It almost never (~twice out of 1000 samples) sleeps significantly more than 1.25 times the time you request.

Also (not shown explicitly), the 1.25 relationship seems to hold pretty well until you get below about 0.2 ms, after which it starts to get a little fuzzy. Additionally, the actual time seems to settle to about 5 ms longer than requested once the requested time gets above 20 ms.

Again, it appears to be a completely different implementation of sleep() in OS X than in Windows or whichever Linux kernel Wilbert was using.

Tim Supinie
  • Could you upload the source code for the benchmark to github/bitbucket? – jfs Jun 05 '15 at 19:37
  • I've tried [it](http://pastebin.com/5cmMaJjb) on my machine. [The result is similar to @Wilbert's answer](http://i.stack.imgur.com/l8L9M.png). – jfs Jun 05 '15 at 20:08
  • I'd guess that the sleep itself is accurate, but Mac OS X scheduling is not precise enough to provide the CPU fast enough, so the wake from the sleep is delayed. If an accurate wake-up time is important, it seems sleep should be set to 0.75 times the actually requested time, then check the time after waking and repeatedly sleep for less and less until the correct time is reached. – Mikko Rantalainen May 11 '20 at 08:18
  • I can achieve very accurate results on this test by enabling THREAD_TIME_CONSTRAINT_POLICY... See [this script](https://github.com/histed/PyToolsMH/blob/master/pytoolsMH/macTiming.py) which derives from the [psychopy](https://github.com/psychopy/psychopy/blob/dev/psychopy/platform_specific/darwin.py) project ... **Warning**: I am not an OS guy and you should probably be careful when manipulating scheduling policies. – A.E Mar 13 '23 at 02:21
28

From the documentation:

On the other hand, the precision of time() and sleep() is better than their Unix equivalents: times are expressed as floating point numbers, time() returns the most accurate time available (using Unix gettimeofday where available), and sleep() will accept a time with a nonzero fraction (Unix select is used to implement this, where available).

And more specifically w.r.t. sleep():

Suspend execution for the given number of seconds. The argument may be a floating point number to indicate a more precise sleep time. The actual suspension time may be less than that requested because any caught signal will terminate the sleep() following execution of that signal’s catching routine. Also, the suspension time may be longer than requested by an arbitrary amount because of the scheduling of other activity in the system.

Stephan202
  • Can anyone explain the "because any caught signal will terminate the sleep() following execution of that signal's catching routine"? Which signals is it referring to? Thanks! – Diego Herranz Oct 07 '13 at 12:27
  • Signals are like notifications that the OS manages (http://en.wikipedia.org/wiki/Unix_signal). It means that if the OS catches a signal, the sleep() finishes after handling that signal. – ArianJM Mar 30 '14 at 18:45
18

If you need more precision or lower sleep times, consider making your own:

import time

def sleep(duration, get_now=time.perf_counter):
    # Busy-wait until the deadline; precise, but keeps one CPU core at 100%.
    now = get_now()
    end = now + duration
    while now < end:
        now = get_now()
Lars
  • Why isn't this just implemented behind time.sleep()? It works so much better for short sleep values. – Jodo Nov 01 '21 at 10:53
  • Great answer, thank you! This is what I was looking for :) – Mlody87 Dec 22 '21 at 17:19
  • That's weird, because sleep should transfer control to the OS in order to handle IO. – iperov May 28 '22 at 05:11
  • @Jodo because this function results in 100% CPU usage (which is obviously not practical for many use cases of `sleep`). The only way you could reduce the CPU usage of this function is to add a call to... `sleep` :) – 101 Feb 12 '23 at 11:33
17

Why don't you find out:

from datetime import datetime
import time

def check_sleep(amount):
    start = datetime.now()
    time.sleep(amount)
    end = datetime.now()
    delta = end - start
    return delta.seconds + delta.microseconds / 1000000.0

# average absolute error over 100 runs, converted to milliseconds
error = sum(abs(check_sleep(0.050) - 0.050) for i in range(100)) * 10
print("Average error is %0.2fms" % error)

For the record, I get around 0.1 ms error on my HTPC and 2 ms on my laptop, both Linux machines.

Ants Aasma
  • Empirical testing will give you a very narrow view. There are many kernels, operating systems and kernel configurations that affect this. Older Linux kernels default to a lower tick rate, which results in a greater granularity. In the Unix implementation, an external signal during the sleep will cancel it at any time, and other implementations might have similar interruptions. – Glenn Maynard Jul 15 '09 at 21:35
  • Well of course the empirical observation is not transferable. Aside from operating systems and kernels, there are a lot of transient issues that affect this. If hard real-time guarantees are required, then the whole system design from hardware up needs to be taken into consideration. I just found the results relevant considering the statements that 10 ms is the minimum accuracy. I'm not at home in the Windows world, but most Linux distros have been running tickless kernels for a while now. With multicores now prevalent, it's pretty likely to get scheduled really close to the timeout. – Ants Aasma Jul 15 '09 at 22:05
8

A small correction: several people mention that sleep can be ended early by a signal. The 3.6 docs say:

Changed in version 3.5: The function now sleeps at least secs even if the sleep is interrupted by a signal, except if the signal handler raises an exception (see PEP 475 for the rationale).
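A small Unix-only demonstration of that behaviour (assumes Python 3.5+; on older versions the sleep would end at the first signal):

```python
import signal
import time

# Install a no-op handler so SIGALRM is caught but raises nothing.
signal.signal(signal.SIGALRM, lambda signum, frame: None)
signal.alarm(1)  # deliver SIGALRM after ~1 second

start = time.monotonic()
time.sleep(2)  # interrupted at ~1 s, but automatically resumed (PEP 475)
elapsed = time.monotonic() - start
print("slept %.2f s" % elapsed)  # ~2.0 on Python 3.5+, not ~1.0
```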

Cristian Ciupitu
user405
7

The time.sleep method has been heavily refactored in the upcoming release of Python (3.11). Similar accuracy can now be expected on both Windows and Unix platforms, and the highest available accuracy is always used by default. Here is the relevant part of the new documentation:

On Windows, if secs is zero, the thread relinquishes the remainder of its time slice to any other thread that is ready to run. If there are no other threads ready to run, the function returns immediately, and the thread continues execution. On Windows 8.1 and newer the implementation uses a high-resolution timer which provides resolution of 100 nanoseconds. If secs is zero, Sleep(0) is used.

Unix implementation:

  • Use clock_nanosleep() if available (resolution: 1 nanosecond);
  • Or use nanosleep() if available (resolution: 1 nanosecond);
  • Or use select() (resolution: 1 microsecond).

So just calling time.sleep will be fine on most platforms starting from Python 3.11, which is great news! It would be nice to do a cross-platform benchmark of this new implementation similar to @Wilbert's.
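Pending a proper study, a quick sketch of such a benchmark could look like this (my own code; it simply sweeps a few requested durations and prints the platform next to the mean actual sleep):

```python
import sys
import time

def benchmark(durations, samples=50):
    """Mean actual sleep time for each requested duration (seconds)."""
    results = {}
    for requested in durations:
        total = 0.0
        for _ in range(samples):
            start = time.perf_counter()
            time.sleep(requested)
            total += time.perf_counter() - start
        results[requested] = total / samples
    return results

print(sys.platform)
for requested, actual in benchmark([0.0001, 0.001, 0.01]).items():
    print("requested %.4f s -> mean actual %.6f s" % (requested, actual))
```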

milembar
3

You can't really guarantee anything about sleep(), except that it will at least make a best effort to sleep as long as you told it (signals can kill your sleep before the time is up, and lots more things can make it run long).

For sure the minimum you can get on a standard desktop operating system is going to be around 16 ms (timer granularity plus time to context switch), but chances are that the percentage deviation from the provided argument is going to be significant when you're trying to sleep for tens of milliseconds.

Signals, other threads holding the GIL, kernel scheduling fun, processor speed stepping, etc. can all play havoc with the duration your thread/process actually sleeps.

codeforester
Nick Bastin
  • The documentation says otherwise: > The actual suspension time may be less than that requested because any caught signal will terminate the sleep() following execution of that signal's catching routine. – Glenn Maynard Jul 15 '09 at 21:36
  • Ah fair point, fixed the post, although getting longer sleeps() is much more likely than shorter ones. – Nick Bastin Jul 15 '09 at 22:25
  • Two and a half years later ... the documentation still lies. On Windows, signals will not terminate sleep(). Tested on Python 3.2, WinXP SP3. – Dave Dec 08 '11 at 15:31
  • Yes, but signals pre-empting sleep are unusual (e.g. KILL). The documentation also says: "Also, the suspension time may be longer than requested by an arbitrary amount because of the scheduling of other activity in the system.", which is more typical. – markmnl May 08 '14 at 02:34
  • Signals and Windows is just silly. On Windows the Python time.sleep() waits on a ConsoleEvent to capture stuff like Ctrl-C. – schlenk Nov 28 '14 at 23:34
  • @Dave: [Python 3.5](https://docs.python.org/3/library/time.html#time.sleep): *"The function now sleeps at least secs even if the sleep is interrupted by a signal, except if the signal handler raises an exception"* – jfs Aug 09 '17 at 07:56
2
import time

def test():
    then = time.time()  # time at the start
    x = 0
    while time.time() <= then + 1:  # stop looping after 1 second
        x += 1
        time.sleep(0.001)  # sleep for 1 ms
    print(x)

On Windows 7 / Python 3.8 this returned 1000 for me, even if I set the sleep value to 0.0005,

so a perfect 1 ms.

Walter
1
import threading
import time

def start(self):
    sec_arg = 10.0
    cptr = 0
    time_init = time.time()
    while True:
        cptr += 1
        time_start = time.time()
        # insert the calculation directly into sleep()
        time.sleep((time_init + (sec_arg * cptr)) - time_start)

        # AND YOUR CODE .......
        t00 = threading.Thread(name='thread_request',
                               target=self.send_request, args=([]))
        t00.start()

Do not use a variable to pass the argument of sleep(); you must insert the calculation directly into sleep().

And the output from my terminal:

1 ───── 17:20:16.891 ───────────────────

2 ───── 17:20:18.891 ───────────────────

3 ───── 17:20:20.891 ───────────────────

4 ───── 17:20:22.891 ───────────────────

5 ───── 17:20:24.891 ───────────────────

....

689 ─── 17:43:12.891 ────────────────────

690 ─── 17:43:14.890 ────────────────────

691 ─── 17:43:16.891 ────────────────────

692 ─── 17:43:18.890 ────────────────────

693 ─── 17:43:20.891 ────────────────────

...

727 ─── 17:44:28.891 ────────────────────

728 ─── 17:44:30.891 ────────────────────

729 ─── 17:44:32.891 ────────────────────

730 ─── 17:44:34.890 ────────────────────

731 ─── 17:44:36.891 ────────────────────

forrest
1

A high-precision variation of time.sleep().

Logic: time.sleep() has poor precision (worse than 5 ms), so given a time window (e.g. 1 second) it repeatedly sleeps for half of the remaining time. Once less than 20 ms remain, it switches to a busy-wait loop (CPU-intensive) and breaks when no time remains.

import time

def high_precision_sleep(duration):
    start_time = time.perf_counter()
    while True:
        elapsed_time = time.perf_counter() - start_time
        remaining_time = duration - elapsed_time
        if remaining_time <= 0:
            break
        if remaining_time > 0.02:
            # sleep for half the remaining time, at least 0.1 ms
            time.sleep(max(remaining_time / 2, 0.0001))
        else:
            pass  # busy-wait for the final < 20 ms

Time test:

script_start_time = time.perf_counter()
time.sleep(1)
time_now = time.perf_counter()
elapsed_time = (time_now - script_start_time) * 1000
print("[%.6f] time.sleep" % elapsed_time)

script_start_time = time.perf_counter()
high_precision_sleep(1)
time_now = time.perf_counter()
elapsed_time = (time_now - script_start_time) * 1000
print("[%.6f] high_precision_sleep" % elapsed_time)

Results:

[1007.893800] time.sleep
[1000.004200] high_precision_sleep
leenremm
-2

Tested this recently on Python 3.7 on Windows 10. Precision was around 1ms.