
I am familiar with how to use the Google Chrome Web Inspector to manually save a webpage as a HAR file with the content. I would like to automate this.

In my searches for tools to automate the generation of a HAR file, I have found some solutions, but none of them save the content of the resources.

I have tried the following without any luck:

Getting the content of the page you requested (the raw HTML) is doable, but capturing the content of every other network resource that loads (CSS, JavaScript, images, etc.) is the problem.

Teddy
  • Did you find a way to do this? – Monodeep Jan 17 '15 at 16:00
  • @Monodeep I never found a solution for this – Teddy Jan 18 '15 at 16:48
  • Thanks for the reply. I found a solution and I am using it successfully. It uses Selenium, Firebug & NetExport (Firefox extensions). If you still need it I can post the code here (I have written it in Python) – Monodeep Feb 22 '15 at 10:38
  • FYI [chrome-har-capturer](https://github.com/cyrus-and/chrome-har-capturer) does that: `--content` option. – cYrus Jun 09 '16 at 10:26

3 Answers


I think the most reliable way to automate generating a HAR file is to use BrowserMob Proxy along with ChromeDriver and Selenium.

Here is a Python script that programmatically generates a HAR file and can be integrated into your development cycle. It also captures content.

from browsermobproxy import Server
from selenium import webdriver
from urllib.parse import urlparse
import json

# Start the BrowserMob Proxy server and create a proxy instance
server = Server("path/to/browsermob-proxy")
server.start()
proxy = server.create_proxy()

# Point Chrome at the proxy
chrome_options = webdriver.ChromeOptions()
proxy_url = urlparse(proxy.proxy).path
chrome_options.add_argument("--proxy-server={0}".format(proxy_url))
driver = webdriver.Chrome("path/to/chromedriver", chrome_options=chrome_options)

# Record a HAR, capturing headers and response bodies
proxy.new_har("http://stackoverflow.com",
              options={'captureHeaders': True, 'captureContent': True})
driver.get("http://stackoverflow.com")

result = json.dumps(proxy.har, ensure_ascii=False)
print(result)

server.stop()
driver.quit()
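
Once you have the HAR, you can confirm that bodies were actually captured by walking its entries. Per the HAR 1.2 spec, each entry's `response` carries a `content` object whose `text` field holds the body (base64-encoded when `encoding` is `"base64"`). A minimal sketch, where `extract_bodies` is a hypothetical helper, not part of browsermob-proxy:

```python
import base64

def extract_bodies(har):
    """Yield (url, body_bytes) for entries whose response body was captured."""
    for entry in har["log"]["entries"]:
        content = entry["response"].get("content", {})
        text = content.get("text")
        if text is None:
            continue  # body was not captured for this entry
        if content.get("encoding") == "base64":
            body = base64.b64decode(text)
        else:
            body = text.encode("utf-8")
        yield entry["request"]["url"], body

# Minimal HAR-shaped dict for illustration:
har = {"log": {"entries": [
    {"request": {"url": "http://stackoverflow.com/"},
     "response": {"content": {"mimeType": "text/html",
                              "text": "<html>...</html>"}}},
]}}

for url, body in extract_bodies(har):
    print(url, len(body))
```

If an entry's `text` is missing even with `captureContent` enabled, the proxy did not record that body, which is the symptom the question describes.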

You can also check out this tool, which generates HAR and NavigationTiming data from both Chrome and Firefox headlessly: Speedprofile

Paras Dahal
  • Thanks! Haven't had a chance to test this, but it looks promising. – Teddy Aug 17 '15 at 14:35
  • 1
    I have observed that using proxies leads to larger than usual timings. Is there a workaround for getting a HAR with correct timings as they usually would be without using a proxy? – vishalg Jun 07 '16 at 14:23
  • 1
    The above doesn't seem to work for headless chrome. So, if I provide chrome_options.add_argument("--headless") the generated json doesn't contain all the HTTP requests. – Punit S Dec 12 '17 at 04:22

You might take a look at PhantomJS; it can export network traffic as a HAR file: http://phantomjs.org/network-monitoring.html

Pete

You can use an HTTP proxy to save the contents. On Windows, you can use the free Fiddler. On Mac and Linux, you can use Charles Proxy, but it is not free.

In Fiddler, for example, you can choose to save the requests in all their glory, including headers.

I-Lin Kuo