
I'm just curious to hear other people's thoughts on why this specific piece of code might run slower in Python 3.11 than in Python 3.10.6. Cross-posted from here. I'm new here - please kindly let me know if I'm doing something wrong.

test.py script:

import timeit
from random import random


def run():
    for i in range(100):
        j = random()


t = timeit.timeit(run, number=1000000)
print(t)

Commands:

(base) conda activate python_3_10_6

(python_3_10_6) python test.py
5.0430680999998

(python_3_10_6) conda activate python_3_11

(python_3_11) python test.py
5.801756700006081
sideshowbarker
mh0w
    Unfortunately, the community here is not really helpful with this type of question and you will be getting lots of downvotes. – Nijat Mursali Nov 13 '22 at 12:51
  • @NijatMursali Ah, righto. Thanks for the heads up. I guess I'll delete the post/question if it's not considered appropriate for this space. – mh0w Nov 13 '22 at 12:52
  • Is this the Reddit post copied over here? – Andrew Nov 13 '22 at 12:56
  • 1
    It's a feature. Code execution depends on many factors. The number of instructions passed to the processor varies according to its availability. You can even observe that if you run the same code again and again you will get different time durations. Microsecond-level variation can be expected. – Bhargav - Retarded Skills Nov 13 '22 at 12:57
  • 2
    Andrew: Yes. Let me know if cross posting in this way is not considered acceptable and I'll delete the post/question. Bhargav: Regarding small variations between runs, I have tried running the script several times with both versions of Python and have consistently found that 3.11 runs this particular script slightly more slowly. It's a tiny difference in real-world terms, I realise. More generally, I do appreciate that Python 3.11 being 'faster' according to the developers does not mean all scripts will run faster. – mh0w Nov 13 '22 at 13:01
  • 3
    This is interesting. I get similar results. When I first got 3.11 and read that it was faster, I ran an algorithm that I wrote on 3.10 which involved over 100 million modular exponentiations and didn't observe any speedup (though in that case I didn't observe any slowdown either). – John Coleman Nov 13 '22 at 13:03
  • 1
    You can ask on meta if cross-posting like this is okay. Cross posting on multiple Stack Exchange communities is definitely not okay, but Reddit isn't a Stack Exchange community. – John Coleman Nov 13 '22 at 13:07
  • 1
    First why the heck should people object to a cross posting on Reddit? It's not so much a cross posting notice as "I've asked this question on Reddit". No need to meta ask anything IMHO. Second, knowing how performance works is very much on topic for software development here, so the close votes are unwarranted. [What's the most upvoted Q on this site](https://stackoverflow.com/questions/11227809/why-is-processing-a-sorted-array-faster-than-processing-an-unsorted-array) ? And how is it substantially different from this one? Third, good info from answer, making this likely a good Q. – JL Peyret Nov 13 '22 at 17:40

1 Answer


This looks like it's probably the PEP 659 optimizations not paying off for random.random.

PEP 659 ("Specializing Adaptive Interpreter") is an effort to JIT-optimize many common operations by specializing bytecode at runtime based on observed types and values. (Not JIT compilation, but definitely JIT optimization.) It pays off for most Python code, but I think random.random isn't covered.

random.random is a method (of a hidden random.Random instance) written in C, with no arguments other than self, so it should be using the METH_NOARGS calling convention. This calling convention has no specialized fast path. Both specialize_c_call and _Py_Specialize_Call just bail out instead of specializing the call.
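You can confirm part of this setup from Python itself. A quick sketch (it can't show the METH_NOARGS flag directly, but it does show that random.random is a C-implemented bound method of a hidden Random instance, which should behave the same on 3.10 and 3.11):

```python
import random

# random.random is a bound method of a hidden random.Random instance,
# implemented in C rather than as a Python-level function.
print(type(random.random).__name__)           # builtin_function_or_method
print(type(random.random.__self__).__name__)  # Random
```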

When PEP 659 doesn't pay off, the work that goes into supporting it is just overhead. I'm not sure which parts contribute how much, but the bytecode is longer than before, due to the separate PRECALL and CALL instructions (although I think there's some work going on to improve that), and attempting specialization, plus tracking when to attempt it, has its own overhead.
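The longer call sequence is visible in the bytecode. A minimal sketch using the standard dis module (the opcode names vary by version: roughly CALL_FUNCTION on 3.10, a PRECALL/CALL pair on 3.11):

```python
import dis
from random import random


def run():
    for i in range(100):
        j = random()


# Print only the call-related instructions from run()'s bytecode;
# on 3.11 each call compiles to PRECALL + CALL, on 3.10 to CALL_FUNCTION.
for instr in dis.Bytecode(run):
    if "CALL" in instr.opname:
        print(instr.opname)
```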

user2357112