I am trying to log in to a website using Python and the requests module.

I am doing:

import requests
payload = {
    'login_Email': 'xxxxx@gmail.com',
    'login_Password': 'xxxxx'
}

with requests.Session() as s:
    p = s.post('https://www.auction4cars.com/', data=payload)
    print(p.text)

The problem is that the output of this just seems to be the login page, not the page AFTER log in. I.e. the page says 'welcome guest' and 'please enter your username and password' etc.

I was expecting it to return the page saying something like 'thanks for logging in xxxxx' etc.

Can anyone suggest what I'm doing wrong?
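For completeness, here is the fuller flow I am attempting, as a sketch: a GET first so the session can pick up any cookies the server sets, then the POST with the same session. (The POST goes to the page URL, which may or may not be the login form's actual action URL.)

```python
import requests

LOGIN_URL = 'https://www.auction4cars.com/'  # may not be the form's actual action URL

payload = {
    'login_Email': 'xxxxx@gmail.com',
    'login_Password': 'xxxxx',
}

def login():
    """GET the login page first so the session picks up any cookies,
    then POST the credentials with the same session."""
    s = requests.Session()
    s.get(LOGIN_URL)            # server may set session cookies here
    return s.post(LOGIN_URL, data=payload)

# p = login()
# print(p.text)
```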


EDIT:

I don't think my question is a duplicate of How to "log in" to a website using Python's Requests module? because I am using the script from the most popular answer there (regarded in that thread as the answer that should have been accepted).

I have also tried the accepted one, but my problem remains.


EDIT:

I am confused about whether I need to do something with cookies. The URLs that I am trying to visit after logging in don't seem to contain cookie values.


EDIT:

This seems to be a similar problem to mine:

get restricted page after login using requests,urllib2 python

However, I don't see any other inputs that I need to fill out, with one possible exception:

Do I need to do anything with the submit button?
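If the submit button has a name attribute, a real browser sends its name/value pair along with the form, and some servers check for it. A sketch of including it in the payload and inspecting the body that would be sent, without actually sending it. The 'login_Submit' name and its value are guesses, not taken from the actual page:

```python
import requests

# Hypothetical field names -- 'login_Submit' is a guess, not from the real page.
payload = {
    'login_Email': 'xxxxx@gmail.com',
    'login_Password': 'xxxxx',
    'login_Submit': 'Login',  # the submit button's name/value pair
}

# Build the request without sending it, to inspect what would go over the wire.
req = requests.Request('POST', 'https://www.auction4cars.com/', data=payload)
prepared = req.prepare()
print(prepared.body)  # urlencoded form body, including the submit field
```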

user1551817
  • When you log in to such a website, they expect cookies along with the username and password, so the best way is to use scraping. – Shanteshwar Inde Dec 24 '18 at 12:15
  • Thank you. Could you provide a little more information? I am scraping info after I log in, but I'm not sure what you mean by using scraping to log in. – user1551817 Dec 24 '18 at 12:25
  • You can also try setting a header on the request; it may work. – Shanteshwar Inde Dec 24 '18 at 12:37
  • Possible duplicate of [How to "log in" to a website using Python's Requests module?](https://stackoverflow.com/questions/11892729/how-to-log-in-to-a-website-using-pythons-requests-module) – stovfl Dec 24 '18 at 12:53
  • The single best thing you should do is watch a successful browser login using a web traffic snooper like Telerik Fiddler, then replicate the actions (GET/POST/whatever) with whichever headers/settings are needed in your setup of requests actions. It will be a suck-it-and-see method - websites vary in what they need. As long as you use a (the same) requests session for login and all operations afterwards, cookies will just work, so it is headers and payload that you need to worry about. – DisappointedByUnaccountableMod Dec 24 '18 at 16:52
  • Also, don't just copy the POST - do the GET of the login page before that; this sometimes gives the client cookies which the login POST expects to get back (although by using a requests session you don't have to worry about the details). – DisappointedByUnaccountableMod Dec 24 '18 at 16:59

1 Answer


I was helped with the problem here:

Replicate browser actions with a python script using Fiddler
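For anyone landing here, the gist of that approach is to capture a successful browser login in Fiddler and then replicate its headers and form fields with a requests session. A sketch of the idea; the header values below are placeholders, not the ones from my capture:

```python
import requests

# Header values are illustrative -- copy the real ones from the
# browser request captured in Fiddler.
headers = {
    'User-Agent': 'Mozilla/5.0 (Windows NT 10.0; Win64; x64)',
    'Referer': 'https://www.auction4cars.com/',
}

s = requests.Session()
s.headers.update(headers)   # sent on every request made with this session

# Build (without sending) the same POST the browser made, to check it matches.
req = s.prepare_request(
    requests.Request('POST', 'https://www.auction4cars.com/',
                     data={'login_Email': 'xxxxx@gmail.com',
                           'login_Password': 'xxxxx'})
)
print(req.headers.get('User-Agent'))
```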

Thank you for any input.

user1551817