I am scraping www.marriott.com for information on their hotels and prices. I used the Chrome inspect tool to monitor network traffic and figure out which API endpoint Marriott is using.
This is the request I am trying to emulate with my Python code:
import requests
from bs4 import BeautifulSoup

base_uri = 'https://www.marriott.com'
availability_search_ext = '/reservation/availabilitySearch.mi'

rate_params = {
    'propertyCode': 'TYSMC',
    'isSearch': 'true',
    'fromDate': '03/01/17',
    'toDate': '03/02/17',
    'numberOfRooms': '1',
    'numberOfGuests': '1',
    'numberOfChildren': '0',
    'numberOfAdults': '1'
}

def get_rates(sess):
    # Emulate the availability search request seen in the network tab
    first_resp = sess.get(base_uri + availability_search_ext, params=rate_params)
    soup = BeautifulSoup(first_resp.content, 'html.parser')
    print(soup.title)

if __name__ == "__main__":
    with requests.Session() as sess:
        # get_hotels(sess)
        get_rates(sess)
However, I get this result:
<!DOCTYPE doctype html>
<html>
<head><script src="/common/js/marriottCommon.js" type="text/javascript"> </script>
<meta charset="utf-8">
</meta></head>
<body>
<script>
var xhttp = new XMLHttpRequest();
xhttp.addEventListener("load", function(a,b,c){
window.location.reload()
});
xhttp.open('GET', '/reservation/availabilitySearch.mi?istl_enable=true&istl_data', true);
xhttp.send();
</script>
</body>
</html>
It seems they are trying to prevent bots from scraping their data: instead of the page, they send back a script that makes an XHR request, reloads the page once it completes, and ultimately hits this endpoint http://www.marriott.com/reservation/rateListMenu.mi to render the webpage.
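(As a side note, rather than hard-coding the XHR path, I could also pull it out of the returned script; this is only a rough sketch using the libraries I already import, and the regex is my own guess at a stable pattern:)

import re

def extract_xhr_path(html):
    # Rough sketch: find the path passed to xhttp.open() in the returned script
    soup = BeautifulSoup(html, 'html.parser')
    for script in soup.find_all('script'):
        match = re.search(r"xhttp\.open\('GET',\s*'([^']+)'", script.get_text())
        if match:
            return match.group(1)  # e.g. '/reservation/availabilitySearch.mi?istl_enable=true&istl_data'
    return None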
So I tried emulating the behavior of the JavaScript that is returned by changing my Python code to this:
rate_list_ext = '/reservation/rateListMenu.mi'

xhr_params = {
    'istl_enable': 'true',
    'istl_data': ''
}

def get_rates(sess):
    # Initial availability search with the full set of parameters
    first_resp = sess.get(base_uri + availability_search_ext,
                          params=rate_params)
    # The XHR request that the returned script makes
    rate_xhr_resp = sess.get(base_uri + availability_search_ext,
                             params=xhr_params)
    # Finally, the page the browser loads after the reload
    rate_list_resp = sess.get(base_uri + rate_list_ext)
    soup = BeautifulSoup(rate_list_resp.content, 'html.parser')
I make the initial GET request with all the parameters, then the XHR request that the script makes, and finally a request to the rateListMenu.mi endpoint to fetch the final HTML page, but I get a "session timed out" response.
I even use a persistent session with the requests library to store any cookies the website returns, after reading: Different web site response with RoboBrowser.
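To be concrete, this is roughly what I mean by a persistent session: the same Session object carries cookies from one request to the next, and I can inspect what the site has set so far. The User-Agent value is just an example of a browser-like header; I don't know whether it matters here.

def debug_session():
    # Rough sketch: cookies set by the site persist across requests on the same Session
    with requests.Session() as sess:
        sess.headers.update({'User-Agent': 'Mozilla/5.0'})  # browser-like header, just a guess
        sess.get(base_uri + availability_search_ext, params=rate_params)
        print(sess.cookies.get_dict())  # whatever cookies the site has returned so far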
What am I doing wrong?