After some investigation, it turns out the website requires both the Accept and User-Agent request headers to be set; otherwise it returns a 503. That is terribly broken. It also appears to be doing user-agent sniffing: with curl's default user agent I get a 403:
$ curl --head http://www.nativeseeds.org/
HTTP/1.1 403 Forbidden
Date: Wed, 26 Sep 2012 14:54:59 GMT
Server: Apache
P3P: CP="NOI ADM DEV PSAi COM NAV OUR OTRo STP IND DEM"
Set-Cookie: f65129b0cd2c5e10c387f919ac90ad66=PjZxNjvNmn6IlVh4Ac-tH0; path=/
Vary: Accept-Encoding
Content-Type: text/html
but it works fine if I set the user agent to a browser string (an old Internet Explorer one here):
$ curl --user-agent "Mozilla/4.0 (compatible; MSIE 5.01; Windows NT 5.0)" --head http://www.nativeseeds.org/
HTTP/1.1 200 OK
Date: Wed, 26 Sep 2012 14:55:57 GMT
Server: Apache
P3P: CP="NOI ADM DEV PSAi COM NAV OUR OTRo STP IND DEM"
Expires: Mon, 1 Jan 2001 00:00:00 GMT
Cache-Control: post-check=0, pre-check=0
Pragma: no-cache
Set-Cookie: f65129b0cd2c5e10c387f919ac90ad66=ykOpGnEE%2CQOMUaVJLnM7W0; path=/
Last-Modified: Wed, 26 Sep 2012 14:56:27 GMT
Vary: Accept-Encoding
Content-Type: text/html; charset=utf-8
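If you need to fetch the page from Python's standard library rather than curl, setting both headers explicitly on the request should get past the block. A minimal sketch, assuming Python 3's urllib.request (the Accept value is a guess; the server only seems to care that the headers are present):

import urllib.request

# The server 403s unfamiliar user agents and reportedly 503s requests
# without an Accept header, so set both explicitly to browser-like values.
req = urllib.request.Request(
    'http://www.nativeseeds.org/',
    headers={
        'User-Agent': 'Mozilla/4.0 (compatible; MSIE 5.01; Windows NT 5.0)',
        'Accept': 'text/html,application/xhtml+xml,*/*;q=0.8',
    },
)
with urllib.request.urlopen(req) as resp:
    print(resp.status, resp.headers.get('Content-Type'))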
The requests module also appears to work out of the box:
>>> import requests
>>> r = requests.head('http://www.nativeseeds.org/')
>>> import pprint; pprint.pprint(r.headers)
{'cache-control': 'post-check=0, pre-check=0',
'content-encoding': 'gzip',
'content-length': '20',
'content-type': 'text/html; charset=utf-8',
'date': 'Wed, 26 Sep 2012 14:58:05 GMT',
'expires': 'Mon, 1 Jan 2001 00:00:00 GMT',
'last-modified': 'Wed, 26 Sep 2012 14:58:09 GMT',
'p3p': 'CP="NOI ADM DEV PSAi COM NAV OUR OTRo STP IND DEM"',
'pragma': 'no-cache',
'server': 'Apache',
'set-cookie': 'f65129b0cd2c5e10c387f919ac90ad66=2NtRrDBra9jPsszChZXDm2; path=/',
'vary': 'Accept-Encoding'}
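If the default python-requests user agent ever gets blocked as well, the same browser-like headers can be passed explicitly (a sketch along the same lines; the Accept value is again a guess):

import requests

headers = {
    'User-Agent': 'Mozilla/4.0 (compatible; MSIE 5.01; Windows NT 5.0)',
    'Accept': 'text/html,application/xhtml+xml,*/*;q=0.8',
}
# requests merges these over its defaults, so the request presents
# browser-like values for the two headers the server seems to check.
r = requests.head('http://www.nativeseeds.org/', headers=headers)
print(r.status_code, r.headers.get('content-type'))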