
Background

I have a Django application. It works and responds well under low load, but under high load (around 100 users/sec) it consumes 100% CPU and then slows down because it is starved for CPU.

Problem:

  • Profiling the application gives me the time taken by each function.
  • This time increases under high load.
  • The time consumed may be due to complex calculation or to waiting for the CPU.

So, how can I find the CPU cycles consumed by a piece of code?

Reducing the CPU consumption should improve the response time, so either:

  • I might have written extremely efficient code and simply need to add more CPU power,

OR

  • I might have some stupid code hogging the CPU and causing the slowdown.
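
To make this concrete, the measurement I have in mind compares wall-clock time with CPU time for a piece of code. A minimal sketch using only the standard library (Python 3 names; on older Pythons resource.getrusage could play the role of time.process_time):

    import time

    def measure(func, *args, **kwargs):
        """Run func once and compare wall-clock time with the CPU time used
        by this process. A large gap means the code spent most of its time
        waiting (for the CPU scheduler, I/O, locks, ...) rather than computing."""
        wall_start = time.perf_counter()
        cpu_start = time.process_time()
        result = func(*args, **kwargs)
        wall = time.perf_counter() - wall_start
        cpu = time.process_time() - cpu_start
        print("wall: %.3f s   cpu: %.3f s   waiting: %.3f s" % (wall, cpu, wall - cpu))
        return result

Here func stands for whatever view or helper I want to check; I am hoping there is a tool that gives me this breakdown per function automatically.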

Update

  • I am using JMeter to load-test my web app; it gives me a throughput of 2 requests/sec with 100 users.
  • I get an average response time of 36 seconds with 100 concurrent requests vs. 1.25 seconds for a single request.

More Info

  • Configuration: Nginx + uWSGI with 4 workers.
  • No database is used; the responses come from a REST API.
  • The REST API response gets cached on the first hit, so it doesn't make a difference afterwards.
  • Using ujson for JSON parsing.
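
For reference, a rough back-of-the-envelope check with the numbers above (a sketch with assumptions, not measurements):

    users      = 100    # concurrent JMeter users
    throughput = 2.0    # requests/sec observed under load
    resp_high  = 36.0   # average response time under load, in seconds
    resp_low   = 1.25   # response time for a single request, in seconds
    workers    = 4      # uWSGI workers

    # Little's Law: requests in flight = throughput * response time
    in_flight = throughput * resp_high        # 72.0, roughly the 100 users

    # If each request needed only resp_low seconds of worker time,
    # 4 workers could sustain at most:
    max_throughput = workers / resp_low       # 3.2 requests/sec

    # Worker time actually consumed per request at the observed 2 req/sec
    # (all 4 workers are pegged at 100% CPU):
    busy_per_request = workers / throughput   # 2.0 seconds

So each request seems to cost about 2 seconds of worker/CPU time under load versus 1.25 seconds alone, and most of the 36-second response time is queueing for a free worker, which is exactly why I want to separate CPU consumption from waiting.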

Curious to know:

  • Python/Django is used by so many organizations for so many big sites that there must be some high-end debugging / memory-CPU analysis tools.
  • All I found were casual snippets of code that perform profiling.
Yugal Jindle

2 Answers


You could try configuring your test to ramp up slowly, slowly enough that you can watch the CPU gradually climb, and then run the profiler before the CPU maxes out. There's no point trying to profile code when the CPU is already maxed out, because at that point everything will be slow. In fact, you only need a relatively light load to get useful data from a profiler.

Also, by increasing the load gradually you will be better able to see whether CPU usage rises gradually (suggesting a CPU bottleneck) or jumps suddenly (suggesting another type of problem, one that would not necessarily be solved by adding more CPU).

Try using something like a Constant Throughput Timer to pace the requests; this will keep JMeter from getting carried away and overloading the system.
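
For illustration, a minimal sketch of per-request profiling middleware that could be enabled for such a light, paced run (Python 3; the old-style process_request/process_response hooks assume the MIDDLEWARE_CLASSES setting of the Django versions of that era, and the class name is made up):

    import cProfile
    import io
    import pstats

    class LightLoadProfileMiddleware(object):
        """Profile each request with cProfile and print the hottest functions.
        Intended only for light-load test runs, not production traffic."""

        def process_request(self, request):
            request._profile = cProfile.Profile()
            request._profile.enable()

        def process_response(self, request, response):
            profile = getattr(request, '_profile', None)
            if profile is not None:
                profile.disable()
                out = io.StringIO()
                pstats.Stats(profile, stream=out).sort_stats('cumulative').print_stats(15)
                print(out.getvalue())   # or append to a log file
            return response

Note that cProfile's default timer is a wall clock; passing timer=time.process_time to cProfile.Profile() makes it count CPU time instead, which is closer to the "CPU cycles" the question asks about.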

Oliver Lloyd
  • Good advice for a start. I will explore the options you suggested. Apart from that, I am looking for more of a ninja profiling tool for Django/Python, one that can provide a more granular look into the system. :) – Yugal Jindle Jun 06 '12 at 04:56

Check out New Relic for some pretty sweet analytics; they have Django-specific instrumentation.
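
For reference, a minimal sketch of how the New Relic Python agent is typically wired into a Django project's wsgi.py (assumes pip install newrelic and a newrelic.ini generated with newrelic-admin generate-config; the project name mysite is a placeholder):

    # wsgi.py
    import newrelic.agent
    newrelic.agent.initialize('newrelic.ini')   # path to the agent config file

    import os
    os.environ.setdefault('DJANGO_SETTINGS_MODULE', 'mysite.settings')

    from django.core.wsgi import get_wsgi_application
    application = get_wsgi_application()        # the agent instruments Django automatically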

Jens Alm