
I am scraping www.marriott.com for information on their hotels and prices. I used the Chrome DevTools Network tab to monitor traffic and figure out which API endpoint Marriott is using.

This is the request I am trying to emulate:

http://www.marriott.com/reservation/availabilitySearch.mi?propertyCode=TYSMC&isSearch=true&fromDate=02/23/17&toDate=02/24/17&numberOfRooms=1&numberOfGuests=1&numberOfChildren=0&numberOfAdults=1

Here is my Python code:

import requests
from bs4 import BeautifulSoup

base_uri = 'https://www.marriott.com'
availability_search_ext = '/reservation/availabilitySearch.mi'

rate_params = {
    'propertyCode': 'TYSMC',
    'isSearch': 'true',
    'fromDate': '03/01/17',
    'toDate': '03/02/17',
    'numberOfRooms': '1',
    'numberOfGuests': '1',
    'numberOfChildren': '0',
    'numberOfAdults': '1'
}

def get_rates(sess):
    # Request the availability page and print the title as a sanity check
    first_resp = sess.get(base_uri + availability_search_ext, params=rate_params)
    soup = BeautifulSoup(first_resp.content, 'html.parser')
    print(soup.title)

if __name__ == "__main__":
    with requests.Session() as sess:
        #get_hotels(sess)
        get_rates(sess)

However, I get this result:

<!DOCTYPE doctype html>

<html>
<head><script src="/common/js/marriottCommon.js" type="text/javascript"> </script>
<meta charset="utf-8">
</meta></head>
<body>
<script>
        var xhttp = new XMLHttpRequest();
        xhttp.addEventListener("load", function(a,b,c){
          window.location.reload()
        });
        xhttp.open('GET', '/reservation/availabilitySearch.mi?istl_enable=true&istl_data', true);
        xhttp.send();
      </script>
</body>
</html>

It seems they are trying to prevent bots from scraping their data: they send back a script that makes an XHR request, reloads the page once that request completes, and then hits this endpoint, http://www.marriott.com/reservation/rateListMenu.mi, to render the webpage.

So I tried emulating the behavior of the returned JavaScript by changing my Python code to this:

rate_list_ext = '/reservation/rateListMenu.mi'

xhr_params = {
    'istl_enable': 'true',
    'istl_data': ''
}

def get_rates(sess):
    # Initial availability search with the full set of rate parameters
    first_resp = sess.get(base_uri + availability_search_ext,
                          params=rate_params)
    # Emulate the XHR request that the returned script makes
    rate_xhr_resp = sess.get(base_uri + availability_search_ext,
                             params=xhr_params)
    # Request the page the browser loads after the reload
    rate_list_resp = sess.get(base_uri + rate_list_ext)
    soup = BeautifulSoup(rate_list_resp.content, 'html.parser')

I make the initial GET request with all the parameters, then I make the XHR request that the script makes, and finally I request the rateListMenu.mi endpoint to get the final HTML page, but I get a session-timed-out response.

I even used a persistent session with the requests library to store any cookies the website returns, after reading: Different web site response with RoboBrowser
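One thing worth checking before anything else is whether the session is actually accumulating cookies across the requests. The snippet below is just a debugging sketch; it assumes nothing about which cookies the site sets, only that the bot-check page contains the XMLHttpRequest script shown above:

import requests

base_uri = 'https://www.marriott.com'
availability_search_ext = '/reservation/availabilitySearch.mi'
rate_params = {  # same search parameters as above
    'propertyCode': 'TYSMC', 'isSearch': 'true',
    'fromDate': '03/01/17', 'toDate': '03/02/17',
    'numberOfRooms': '1', 'numberOfGuests': '1',
    'numberOfChildren': '0', 'numberOfAdults': '1',
}

with requests.Session() as sess:
    resp = sess.get(base_uri + availability_search_ext, params=rate_params)
    print(resp.status_code)               # the bot-check page is still served with 200
    print(sess.cookies.get_dict())        # cookies the session has accumulated so far
    print('XMLHttpRequest' in resp.text)  # True means we got the bot-check script back

If the cookie jar is empty after the first request, no amount of replaying the XHR will help, since the reload only works in a browser because it carries the cookies set along the way.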

What am I doing wrong?

  • Have you tried including headers and user-agents? – B.Adler Feb 23 '17 at 15:26
  • No, I have not tried that. Which headers should I be adding? I think cookies may be an issue, but since the request is actually a series of GET requests, I do not know which headers to add and where. – Chirag Feb 23 '17 at 17:59

1 Answer


When the JavaScript makes its GET requests, the browser includes headers with them. If you include those headers in your own requests, you should get similar responses.

example:

headers = {"User-Agent": "Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/54.0.2840.100 Safari/537.36"}

sess.get(base_uri + availability_search_ext, params=rate_params, headers=headers)
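Beyond the User-Agent, you can attach a fuller set of browser-like headers to every request in the session, and mark the emulated XHR call the way a browser would. The sketch below is an educated guess, not a confirmed fix: Accept, Accept-Language, X-Requested-With, and Referer are headers Chrome normally sends, but whether Marriott's bot check actually keys on them is an assumption.

import requests
from bs4 import BeautifulSoup

base_uri = 'https://www.marriott.com'
availability_search_ext = '/reservation/availabilitySearch.mi'
rate_list_ext = '/reservation/rateListMenu.mi'
rate_params = {
    'propertyCode': 'TYSMC', 'isSearch': 'true',
    'fromDate': '03/01/17', 'toDate': '03/02/17',
    'numberOfRooms': '1', 'numberOfGuests': '1',
    'numberOfChildren': '0', 'numberOfAdults': '1',
}

# Headers a real Chrome sends on a normal page load
browser_headers = {
    'User-Agent': 'Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/537.36 '
                  '(KHTML, like Gecko) Chrome/54.0.2840.100 Safari/537.36',
    'Accept': 'text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8',
    'Accept-Language': 'en-US,en;q=0.8',
}

# Extra headers browsers add to XHR calls; treating them as required
# by the bot check is an assumption on my part
xhr_headers = {
    'X-Requested-With': 'XMLHttpRequest',
    'Referer': base_uri + availability_search_ext,
}

with requests.Session() as sess:
    sess.headers.update(browser_headers)  # sent with every request below
    sess.get(base_uri + availability_search_ext, params=rate_params)
    # Emulate the script's XHR; per-request headers are merged on top
    # of the session headers by requests
    sess.get(base_uri + availability_search_ext,
             params={'istl_enable': 'true', 'istl_data': ''},
             headers=xhr_headers)
    rate_list_resp = sess.get(base_uri + rate_list_ext)
    soup = BeautifulSoup(rate_list_resp.content, 'html.parser')
    print(soup.title)

If that still times out, compare the exact headers and cookies the browser carries into the rateListMenu.mi request in the DevTools Network tab and replicate whatever differs.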
– B.Adler