I have been using the `requests` library to scrape this website. I haven't made many requests to it, maybe 25 within 10 minutes, but all of a sudden the website started giving me a 404 error.
My question is: I read somewhere that fetching a URL with a browser is different from fetching it with something like `requests`, because a `requests` fetch doesn't send the cookies, headers, and other things a browser would. Is there an option in `requests` to emulate a browser so the server doesn't think I'm a bot? Or is this not an issue at all?
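For what it's worth, this is roughly what I was imagining: setting browser-like headers and reusing a `Session` so cookies persist between requests. The URL and the User-Agent string here are just placeholders, not the real site.

```python
import requests

# Placeholder URL; the real site is not shown here.
URL = "https://example.com/some/page"

# Headers copied from a typical browser request; User-Agent is the
# main one servers check, the others make the request look less bot-like.
headers = {
    "User-Agent": (
        "Mozilla/5.0 (Windows NT 10.0; Win64; x64) "
        "AppleWebKit/537.36 (KHTML, like Gecko) "
        "Chrome/120.0.0.0 Safari/537.36"
    ),
    "Accept": "text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8",
    "Accept-Language": "en-US,en;q=0.9",
}

# A Session keeps cookies between requests, like a browser does.
with requests.Session() as session:
    session.headers.update(headers)
    response = session.get(URL, timeout=10)
    response.raise_for_status()
    print(response.status_code)
```

Would something like this be enough, or does the server detect bots some other way?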