Are there any reasons to use Django with PyPy? I've read that PyPy increases performance.
-
Real-world performance numbers were discussed on the pypy-dev mailing list recently. I got similar results running a largish site with PyPy 1.7 and psycopg2ct. It seems that currently the ctypes-based PostgreSQL drivers (psycopg2ct or pypq) prevent substantial speedups in typical Django apps. Also, you need to take into account the longish warm-up of the JIT. See the thread starting at http://mail.python.org/pipermail/pypy-dev/2011-October/008499.html – akaihola Nov 28 '11 at 08:11
-
The psycopg2cffi project seems to have taken database adaptor performance to a new level. See http://chtd.ru/blog/bystraya-rabota-s-postgres-pod-pypy/?lang=en – akaihola Aug 04 '13 at 18:40
-
For more information about different options for using PostgreSQL with PyPy, see http://stackoverflow.com/a/13663976/15770 – akaihola Aug 04 '13 at 18:41
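As a concrete starting point for the adapters mentioned in these comments, here is a minimal sketch (database name and credentials are placeholders) of pointing Django's stock PostgreSQL backend at psycopg2cffi through its compatibility shim, which registers the cffi-based driver under the name `psycopg2`:

```python
# settings.py (sketch): use psycopg2cffi on PyPy, plain psycopg2 elsewhere.
try:
    from psycopg2cffi import compat
    compat.register()  # makes "import psycopg2" resolve to psycopg2cffi
except ImportError:
    pass  # on CPython with the C extension installed, nothing to do

DATABASES = {
    'default': {
        'ENGINE': 'django.db.backends.postgresql_psycopg2',
        'NAME': 'mydb',        # placeholder
        'USER': 'myuser',      # placeholder
        'PASSWORD': 'secret',  # placeholder
        'HOST': 'localhost',
    }
}
```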
3 Answers
Unlikely. A Django application is almost always I/O-bound, usually because of the database connection. PyPy wouldn't help with that at all, even if it were fully compatible (which I'm not sure it is).
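A quick way to check how database-bound a particular view is (a hedged sketch, not from the answer): with `DEBUG = True`, Django records every executed query and its duration on the connection object, so after exercising the view you can do something like:

```python
# Run after hitting the view, e.g. from "manage.py shell" or in a test,
# with DEBUG = True so Django keeps a log of executed queries.
from django.db import connection

for query in connection.queries:
    print(query['time'], query['sql'][:80])  # duration (seconds) and SQL
print('total queries:', len(connection.queries))
```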

-
Ok, how can I increase the performance of my Django app? Please suggest some ways. – Stan Dec 20 '10 at 14:20
-
@Stanislav Feldman: Profile it to see where the bottlenecks are, then go from there. Post the results of profiling here if you need more help. – Vinay Sajip Dec 20 '10 at 14:28
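For example, one hedged way to profile a single view with cProfile (the view name `post_list`, the app `myapp`, and the URL are hypothetical; run it from `manage.py shell` so settings are configured):

```python
import cProfile
import pstats

from django.test import RequestFactory
from myapp.views import post_list  # hypothetical view under test

request = RequestFactory().get('/posts/')  # build a request without running a server
profiler = cProfile.Profile()
profiler.runcall(post_list, request)       # run the view under the profiler

stats = pstats.Stats(profiler)
stats.sort_stats('cumulative').print_stats(20)  # top 20 entries by cumulative time
```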
-
It was a general question. I want to learn the basics for a future project. I started the project with Grails, but it uses a lot of memory, so I decided to look at Django. – Stan Dec 20 '10 at 14:40
-
Premature optimization is a bad idea. Optimizing before you've even started the project is *very* premature. – Wooble Dec 20 '10 at 14:50
-
I don't see why "optimizing before starting the project" is a bad idea... After all, you have nothing to lose, and the bad decisions you make in the beginning will only increase the maintenance burden afterwards. – Jeeyoung Kim Dec 20 '10 at 20:32
-
@jeeyoung-kim, Most of the time it's a very bad idea. See [is-premature-optimization-really-the-root-of-all-evil](http://stackoverflow.com/questions/211414/is-premature-optimization-really-the-root-of-all-evil). At least it's not as clear-cut a case of "nothing to lose" as you make it out to be. – Tim Kersten Mar 23 '11 at 12:35
-
This answer is in conflict with the data at http://speed.pypy.org, which shows Django running ~12x faster on PyPy than on CPython. – Jonathan Livni Jul 07 '11 at 08:20
-
@Jonathan that chart is misleading at best. If you look at the [actual benchmark for Django](http://codespeak.net/svn/pypy/benchmarks/unladen_swallow/performance/bm_django.py), it only tests template rendering, which is indeed CPU-dependent but only a small part of the full request cycle. My comment stands. – Daniel Roseman Jul 07 '11 at 08:26
-
Template rendering is the hardest part of the request cycle, assuming your queries are fast enough (in my case all queries complete in less than 10 ms, but template rendering takes about 100-200 ms). – Ivan Virabyan Aug 02 '11 at 15:54
-
@Ivan: are you sure? Don't forget that querysets are lazy, which means that most of the time they won't be evaluated until the template is being rendered. – Daniel Roseman Aug 02 '11 at 16:06
-
Yes, I'm sure. I know about the laziness of querysets; I profiled my application a lot and found that the bottleneck is the templating system. I switched to Jinja2, which made my application 2x faster. You can easily check this yourself: create a simple view that shows a simple list of posts. On my machine the page loads in 28 ms. I converted the queryset to a list before rendering the template, so the time spent in the template is 20 ms, the query itself is executed by the database engine in 1 ms, about 6 ms is spent in the ORM code, and about 1 ms in the rest of the Django code. – Ivan Virabyan Aug 02 '11 at 16:30
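The kind of measurement described in this comment can be reproduced roughly like this (a sketch only; model and template names are illustrative): force the queryset before rendering so the template time is separated from the query/ORM time.

```python
import time

from django.shortcuts import render
from myapp.models import Post  # hypothetical model

def post_list(request):
    t0 = time.time()
    posts = list(Post.objects.all())  # evaluate the queryset up front
    t1 = time.time()
    response = render(request, 'posts/list.html', {'posts': posts})
    t2 = time.time()
    print('query+ORM: %.1f ms, template: %.1f ms'
          % ((t1 - t0) * 1000, (t2 - t1) * 1000))
    return response
```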
-
So the ORM is also a slow part if you run lots of queries. Even if each query is fast (<1 ms) and their total execution time is under 10 ms, the total time to run them through the ORM may be 3-10 times slower. This is easy to see if you access related fields on a model in a loop without using select_related. – Ivan Virabyan Aug 02 '11 at 16:36
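To make the select_related point concrete, a small sketch (the Post model with an `author` foreign key is hypothetical):

```python
from myapp.models import Post  # hypothetical model with a ForeignKey "author"

# N+1 queries: one for the posts, plus one per post to fetch its author.
for post in Post.objects.all():
    print(post.author.name)

# One query: select_related fetches the authors with a JOIN up front.
for post in Post.objects.select_related('author'):
    print(post.author.name)
```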
Depends.
PyPy does improve performance for all benchmarks in PyPy's benchmark suite. For Django that is only template rendering for now, but no one has submitted anything else. It is, however, safe to assume that performance-critical code will be faster (especially after some tuning).
Compatibility-wise, databases are a bit of an issue, because only SQLite is working and it's slow (there is a branch to fix that, though). People have also reported pg8000 working with SQLAlchemy, for example, but I don't have first-hand experience with it.
Cheers, fijal
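As an illustration of the pg8000 + SQLAlchemy combination mentioned in this answer (a hedged sketch; connection details are placeholders), something like the following works on PyPy because pg8000 is a pure-Python driver:

```python
from sqlalchemy import create_engine, text

# pg8000 is pure Python, so it runs unmodified on PyPy.
engine = create_engine('postgresql+pg8000://myuser:secret@localhost/mydb')

with engine.connect() as conn:
    print(conn.execute(text('SELECT version()')).scalar())
```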

I have done some experimentation with PyPy + Django. There are two main issues:
Most database adaptors and other third-party modules cannot be compiled with PyPy (even when the wiki says they can).
One server that I thought might benefit from JIT compilation, because it did a fancy calculation in some requests, instead showed an increasing memory footprint, perhaps because the JIT was storing traces that turned out to be unique to each request and so were never reused.
Theoretically PyPy might be a win if your server does interesting calculations, uses pure-Python modules, and keeps large numbers of objects in memory (because PyPy can reduce the memory used per object in some circumstances). Otherwise the higher memory requirements of the JIT will be an impediment, because they reduce opportunities for in-memory caching and may require extra servers to run enough server processes.
