Use requests just for its pure simplicity. There's an informative gist on GitHub that compares logging in to an authenticated resource using urllib2 vs. requests. If you're working with JSON responses, requests can easily translate the response directly into a Python dict:
import requests

r = requests.get("http://example.com/api/query?param=value&param2=value2", auth=(user, passwd))
results_dict = r.json()
Simple as that: no extra json import to deal with, no dumping and loading, etc. Just get the data, translate it to Python, done.
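For contrast, this is roughly the manual decoding you'd otherwise write (a minimal sketch reusing the r from above):

import json

results_dict = json.loads(r.text)  # decode the JSON body yourself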
urllib and urllib2 are just not very convenient. You have to build requests and handlers, set up authentication managers, and take care of a lot of nitty-gritty that you shouldn't have to. http.client is even lower-level: its contents are used by the urllibs to do their thing and aren't often accessed directly. Requests is getting more and more feature-ful by the day, all with the overarching principle of making things as easy to do as possible, yet allowing as much customization as needed if your requirements are out of the ordinary. It has a very active development and user community, so if you need something done, chances are others do too, and with its short release schedule you may see a patch out before too long.
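For comparison, here's a rough sketch of the same authenticated GET using urllib.request (urllib2's Python 3 successor); the URL and credentials are placeholders, not a real endpoint:

import json
import urllib.request

# Build the plumbing by hand: password manager, auth handler, opener.
password_mgr = urllib.request.HTTPPasswordMgrWithDefaultRealm()
password_mgr.add_password(None, "http://example.com/", user, passwd)
auth_handler = urllib.request.HTTPBasicAuthHandler(password_mgr)
opener = urllib.request.build_opener(auth_handler)

# Make the request and decode the JSON payload yourself.
with opener.open("http://example.com/api/query?param=value&param2=value2") as resp:
    results_dict = json.loads(resp.read().decode("utf-8"))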
So, if you're mainly just going to be consuming web services, requests is an easy choice. And on the off-chance that you really can't do something with it, the others are there in the standard lib to back you up.