
I have found that when I ask Python to do something more demanding, it doesn't use my machine's resources at 100% and it isn't really fast. It is fast compared to many other interpreted languages, but compared to compiled languages I think the difference is really remarkable.

Is it possible to speed things up with a Just-In-Time (JIT) compiler in Python 3?

Usually a JIT compiler is the only thing that can improve performance in interpreted languages, so that is what I'm asking about; if other solutions are available, I would love to accept new answers.

Tarik
guz
    PyPy has JIT: http://doc.pypy.org/en/latest/jit/index.html – rubik Oct 23 '12 at 16:32
  • @rubik Thanks, but I was looking for a solution for Python 3, not Python 2, and for the official interpreter, not any other interpreter. – guz Oct 23 '12 at 16:35
    Though PyPy doesn't yet support Python 3. Depending on what you are doing, there are all kinds of ways of improving performance – for example, using better algorithms, parallelisation using the `multiprocessing` or `threading` modules, or writing an extension in C (which can be made easier using [cython](http://cython.org/) or similar software). – James Oct 23 '12 at 16:35
  • Most implementations of Python aren't interpreted but rather compiled to bytecode. – Russell Borogove Oct 23 '12 at 18:45
    @rubik, great suggestion. I would add to your list: "use existing extensions (like NumPy)". – jimhark Oct 24 '12 at 00:50

7 Answers


First off, Python 3(.x) is a language, for which there can be any number of implementations. Okay, to this day no implementation except CPython actually implements those versions of the language. But that will change (PyPy is catching up).

To answer the question you meant to ask: CPython, 3.x or otherwise, does not, never did, and likely never will, contain a JIT compiler. Some other Python implementations (PyPy natively, Jython and IronPython by re-using JIT compilers for the virtual machines they build on) do have a JIT compiler. And there is no reason their JIT compilers would stop working when they add Python 3 support.

But while I'm here, let me also address a misconception:

Usually a JIT compiler is the only thing that can improve performance in interpreted languages

This is not correct. A JIT compiler, in its most basic form, merely removes interpreter overhead, which accounts for some of the slowdown you see, but not for the majority. A good JIT compiler also performs a host of optimizations which remove the overhead needed to implement numerous Python features in general (by detecting special cases which permit a more efficient implementation), prominent examples being dynamic typing, polymorphism, and various introspective features.

Just implementing a compiler does not help with that. You need very clever optimizations, most of which are only valid in very specific circumstances and for a limited time window. JIT compilers have it easy here, because they can generate specialized code at run time (it's their whole point), can analyze the program easier (and more accurately) by observing it as it runs, and can undo optimizations when they become invalid. They can also interact with interpreters, unlike ahead of time compilers, and often do it because it's a sensible design decision. I guess this is why they are linked to interpreters in people's minds, although they can and do exist independently.

There are also other approaches to making Python implementations faster, apart from optimizing the interpreter's code itself – for example, the HotPy (2) project. But those are currently at the research or experimentation stage, and have yet to show their effectiveness (and maturity) on real code.

And of course, a specific program's performance depends much more on the program itself than on the language implementation. The language implementation only sets an upper bound on how fast a given sequence of operations can run. Generally, you can improve a program's performance far more simply by avoiding unnecessary work, i.e. by optimizing the program itself. This is true regardless of whether you run the program through an interpreter, a JIT compiler, or an ahead-of-time compiler. If you want something to be fast, don't go out of your way to find a faster language implementation. There are applications for which the overhead of interpretation and dynamism is infeasible, but they aren't as common as you'd think (and they are often solved by selectively calling into machine-code-compiled code).
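To illustrate the point about optimizing the program itself, here is a small, hypothetical example (names invented for illustration) where an algorithmic change dwarfs anything a faster language implementation could buy:

```python
import timeit

def contains_slow(items, targets):
    # O(n*m): scans the whole list once per lookup
    return [t for t in targets if t in items]

def contains_fast(items, targets):
    # O(n+m): build a set once, then each lookup is O(1) on average
    item_set = set(items)
    return [t for t in targets if t in item_set]

items = list(range(2_000))
targets = list(range(0, 4_000, 2))

slow = timeit.timeit(lambda: contains_slow(items, targets), number=5)
fast = timeit.timeit(lambda: contains_fast(items, targets), number=5)
print(f"list lookups: {slow:.4f}s, set lookups: {fast:.4f}s")
```

The same interpreter runs both versions; the gap comes entirely from the data structure choice, and no compiler can close it for you.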

  • I have heard that Google is interested in making Python faster with a JIT for the 3.x releases, so I was looking for answers. The problem with having a different interpreter is just the fact that you end up having more than one implementation; also, many applications that offer a built-in Python console only target the official Python interpreter. So, in the end, is there nothing good and ready for Python 3? – guz Oct 23 '12 at 16:51
    @guz Google's Unladden Swallow project was [abandoned](http://www.python.org/dev/peps/pep-3146/#pep-withdrawal) a long time ago, and while some of their work lives on in CPython and elsewhere, their JIT compiler is dead (and never worked well to begin with). I see having multiple implementations **as an advantage** in general, though the point about embedding is a good one. –  Oct 23 '12 at 16:57
  • Does't Java's JIT compile for the JVM? My guess is that Jython would do the same thing. If that is the case, can we not say that Python compiles for its own VM? The `dis` module can show what is actually being run. – Noctis Skytower Oct 23 '12 at 17:32
    @NoctisSkytower Most JVMs contain a JIT compiler which compiles Java *bytecode into machine code* (and AFAIK Jython generates JVM bytecode). CPython and PyPy indeed compile Python code to their own internal bytecode prior to running it. But that does not make them JIT compilers in the usual sense (which includes compilation to native code output, and tight integration with other parts of the runtime if there are any). –  Oct 23 '12 at 18:06
    @delnan Thanks for the clarification! It is interesting to find out that there are actually multiple levels of compilation then. `source code -> byte code -> native code` And then the microprocessor interprets that into microcode ... – Noctis Skytower Oct 23 '12 at 19:05
    I had a relatively simple modification of Levenshtein distance implemented in Python. That routine was called a lot, so I re-implemented it in C, but used the Python storage types and did no optimization of any sort, so it is basically the same code. Execution time dropped from 5 s to 200 ms. CPython does a terrible job at running CPU-heavy operations. If CPU and RAM are the cause of slow execution, compiling instead of interpreting will always result in a major speed-up. A JIT is a way of increasing performance in a lot of scenarios without losing convenience for the programmers. – Gellweiler Jul 02 '18 at 10:06

The only Python implementation that has a JIT is PyPy. But PyPy is both a Python 2 implementation and a Python 3 implementation.

Ngure Nyaga

The Numba project should work on Python 3. Although it is not exactly what you asked, you may want to give it a try: https://github.com/numba/numba/blob/master/docs/source/doc/userguide.rst.

It does not support all Python syntax at this time.
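A minimal sketch of how Numba is typically used – the `@jit` decorator is from Numba's own API; the `ImportError` fallback below is a hypothetical addition just so the snippet also runs where Numba isn't installed:

```python
try:
    from numba import jit  # compiles the decorated function to machine code
except ImportError:
    # Fallback: a no-op decorator that leaves the function unchanged,
    # so the sketch still works without Numba.
    def jit(func):
        return func

@jit
def triangle(n):
    # A tight numeric loop: exactly the kind of code Numba handles well
    total = 0
    for i in range(n):
        total += i
    return total

print(triangle(10))
```

With Numba present, the first call triggers compilation; later calls run the generated machine code directly.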

rubik
    Numba, as far as I can tell, is not and never intended to be, an implementation of Python. Instead, it's apparently an implementation for a language that looks deceptively like Python, but is actually nothing like it -- sacrificing many language features for performance. Correct me if I'm wrong. Maybe the PyPy developers brainwashed me, but I think that this should not be compared to Python (or even called such) except to state that it's totally unlike Python. –  Oct 23 '12 at 16:59
  • @delnan: That's interesting. Why not call it Python with fewer features? I don't know the project well, but IIUC you have a Python file, then apply the jit decorator and you're done :) Probably this is too optimistic and/or naive. Actually I haven't even given it a try, although I wanted to... – rubik Oct 23 '12 at 17:04
  • Because "Python with fewer features" is not Python; it's a very different language that happens to also be accepted as Python. Yes, assuming it just works is too optimistic. Unless the Numba developers single-handedly did what PyPy did, with a lot of additional constraints, in a lot less time with less manpower, Numba necessarily supports only a tiny subset of Python. I'd say the minimum restriction is (implicit, easily inferable) static typing. I would be pleasantly surprised if they supported arbitrary user-defined objects too, but I doubt it. –  Oct 23 '12 at 17:08
  • @delnan: Ok you convinced me! I won't call it Python in future answers! ;) – rubik Oct 25 '12 at 16:30

You can try the PyPy py3 branch, which is more or less Python-compatible, but the official CPython implementation has no JIT.

unddoch
  • Thanks, I'm only interested in the official Python interpreter for version 3.x of the language, so I will take this as a _no_. – guz Oct 23 '12 at 16:35

This will best be answered by some of the remarkable Python developer folks on this site.

Still, I want to comment: when discussing the speed of interpreted languages, I just love to point to a project hosted at this location: the Computer Language Benchmarks Game.

It's a site dedicated to running benchmarks. There are specified tasks to do. Anybody can submit a solution in his/her preferred language and then the tests compare the runtime of each solution. Solutions can be peer reviewed, are often further improved by others, and results are checked against the spec. In the long run this is the most fair benchmarking system to compare different languages.

As you can see from indicative summaries like this one, compiled languages are quite fast compared to interpreted languages. However, the difference probably lies not so much in the exact type of compilation as in the fact that Python (and the others in the graph slower than Python) is fully dynamic. Objects can be modified on the fly. Types can be modified on the fly. So some type checking has to be deferred to run time, instead of compile time.
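As a small, hypothetical illustration of that dynamism (the class and method names are invented for the example):

```python
class Point:
    def __init__(self, x, y):
        self.x, self.y = x, y

p = Point(1, 2)

# Both instances and classes can be modified on the fly, so an
# ahead-of-time compiler cannot assume a fixed layout or method set.
p.z = 3                                                # new attribute, on this instance only
Point.norm1 = lambda self: abs(self.x) + abs(self.y)  # new method, added at run time

print(p.norm1(), p.z)  # both are found by the usual run-time lookup
```

Every attribute access in such a language must go through a lookup that can only be resolved when the program runs, which is exactly the check a static language discharges at compile time.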

So while you can argue about compiler benefits, you have to take into account that there are different features in different languages. And those features may come at an intrinsic price.

Finally, when talking about speed: most often it's not the language and the perceived slowness of a language that causes the issue, it's a bad algorithm. I have never had to switch languages because one was too slow: when there's a speed issue in my code, I fix the algorithm. However, if there are time-consuming, computationally intensive loops in your code, it is usually worthwhile to move those into compiled code. A prominent example is libraries coded in C used by scripting languages (Perl XS libs, or e.g. numpy/scipy for Python; lapack/blas are examples of libs available with bindings for many scripting languages).
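As a minimal illustration of that last point, even handing a hot loop to something already implemented in C – here just the built-in `sum`, as a stand-in for libraries like numpy – typically helps noticeably:

```python
import timeit

def manual_sum(values):
    # Pure-Python loop: every iteration goes through the interpreter
    total = 0
    for v in values:
        total += v
    return total

values = list(range(100_000))

loop_time = timeit.timeit(lambda: manual_sum(values), number=20)
c_time = timeit.timeit(lambda: sum(values), number=20)  # built-in sum runs in C
print(f"python loop: {loop_time:.4f}s, built-in sum: {c_time:.4f}s")
```

The exact ratio depends on the machine and interpreter version, but the C-implemented path is reliably the faster one.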

cfi
  • Yes, but if I just run code from a source.py file I probably can't benefit from this _dynamic-ness_; also, at the exact moment I run my code from files you can determine my OS, my platform and what my program will do, which are probably useful pieces of information that could lead to optimizations, as in the JIT example. – guz Oct 23 '12 at 17:01
  • @igouy: Thanks for pointing it out. I've clarified my response. – cfi Nov 21 '12 at 08:01
  • The project name is shown in the banner on every page, and in the web browser title bar, and those paragraphs shown on the website explain why the project has not been called "shootout" anything for at least 5 years. – igouy Nov 21 '12 at 17:59
  • You _are_ being picky. But maybe rightly so. Changed the name. Be gentle with your critics: The server is still named shootout, and I'm much older than I care to admit here and have been used to that name for years. I do believe the name change is all word-play because in the end it's the context that counts. If people did not understand the merits and the problems with cross-language benchmarks before, they won't understand now. I still corrected this because I do believe in the magic of words and am a nitpicker myself :-) Seriously, thanks for your effort in getting this corrected. – cfi Nov 22 '12 at 09:03
  • Wading through Google search results comprised of porn sites and college mass murder just wasn't a bright happy start to the day for me - so after Virginia Tech I changed the name. – igouy Nov 26 '12 at 20:09

If you mean a JIT as in a just-in-time compiler to a bytecode representation, then CPython has had such a feature since 2.2. If you mean a JIT to machine code, then no. Still, the compilation to bytecode provides a lot of performance improvement. If you want compilation to machine code, then PyPy is the implementation you're looking for.

Note: PyPy doesn't work with Python 3.x
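You can inspect the bytecode this answer refers to with the standard library's `dis` module:

```python
import dis

def add(a, b):
    return a + b

dis.dis(add)  # prints the bytecode CPython compiled the function into

# The exact opcode names vary across CPython versions (e.g. BINARY_ADD
# became BINARY_OP in 3.11), but the shape is the same: load the
# arguments, apply the operation, return the result.
ops = [ins.opname for ins in dis.get_instructions(add)]
print(ops)
```

This bytecode is what the CPython virtual machine interprets; no machine code is generated at any point.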

Aniket Inge

If you are looking for speed improvements in a block of code, then you may want to have a look at rpythonic, which compiles down to C using PyPy. It uses a decorator that turns it into a JIT for Python.

jdavid_1385