
I was trying to do this:

import requests

s = requests.Session()

# Log in first
login_data = dict(userName='user', password='pwd')
ra = s.post('http://example/checklogin.php', data=login_data)
print(ra.content)
print(ra.headers)

# Then post an answer, passing the session's cookies explicitly
ans = dict(answer='5')
f = s.cookies
r = s.post('http://example/level1.php', data=ans, cookies=f)
print(r.content)

But the second POST request returns a 404 error. Can someone help me figure out why?

Mayank Jha

1 Answer


In recent versions of requests, the Session object comes with cookie persistence built in; see the requests Session objects docs. So you don't need to add the cookies manually. Just do:

import requests

s = requests.Session()

# Log in; the session stores any cookies the server sets
login_data = dict(userName='user', password='pwd')
ra = s.post('http://example/checklogin.php', data=login_data)
print(ra.content)
print(ra.headers)

# The stored cookies are sent automatically on the next request
ans = dict(answer='5')
r = s.post('http://example/level1.php', data=ans)
print(r.content)

Print the cookies to check whether you are logged in:

for cookie in s.cookies:
    print(cookie.name, cookie.value)
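If the login succeeded, you should see the server's session cookie listed here (with a PHP backend like checklogin.php this is often something named PHPSESSID, though that exact name is just a guess). You can also grab all stored cookies at once as a plain dict:

# RequestsCookieJar supports a plain-dict view of the stored cookies
print(s.cookies.get_dict())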

Also, is the example site yours?
If not, maybe the site rejects bots/crawlers.
You can change your request's User-Agent so it looks like you are using a regular browser.


For example:

import requests

s = requests.Session()

# Pretend to be a regular browser
headers = {
    'User-Agent': 'Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/29.0.1547.62 Safari/537.36'
}

login_data = dict(userName='user', password='pwd')
ra = s.post('http://example/checklogin.php', data=login_data, headers=headers)
print(ra.content)
print(ra.headers)

ans = dict(answer='5')
r = s.post('http://example/level1.php', data=ans, headers=headers)
print(r.content)
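
As a side note, here is a minimal sketch (reusing the same placeholder URLs) that sets the User-Agent once on the session itself instead of passing headers= on every call. Per-request headers are merged with the session's headers, and the login cookies travel in their own Cookie header managed by the session, so adding a User-Agent does not overwrite them.

import requests

s = requests.Session()
# Set once; these headers are merged into every request this session makes
s.headers.update({
    'User-Agent': 'Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/29.0.1547.62 Safari/537.36'
})

login_data = dict(userName='user', password='pwd')
ra = s.post('http://example/checklogin.php', data=login_data)

ans = dict(answer='5')
r = s.post('http://example/level1.php', data=ans)
print(r.content)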
atupal
    Doesn't adding the user-agent headers override the headers which contain the logged in cookies? When I do this, I get a page, but it's not logged in... – David Callanan Apr 10 '17 at 10:39