
I'm trying to create a Python script that can download a video stream. I can find the URL manually through the browser's developer tools, but I don't know how to automate that step. The URL itself is not in the page source; a JavaScript file initiates a request to fetch the files.

This is what I can see in Google Chrome DevTools.

I tried working with "requests", but it seems to be only for sending requests, not for capturing the ones a site makes on its own. Should I use something that simulates a browser, like Selenium? What is the usual technique for this kind of problem? Thank you in advance.


1 Answer


In cases like this you need a headless browser. That is really the only thing that can execute the JavaScript (unless you want to pick it apart yourself with regex, which is inadvisable in any case) and find the file reliably.
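
For example, a minimal sketch with Selenium and headless Chrome (the page URL is a placeholder, and you need a matching chromedriver available on your PATH):

```python
# Minimal sketch: load a JavaScript-heavy page in headless Chrome so the
# scripts that request the video actually run. The URL is a placeholder.
from selenium import webdriver

options = webdriver.ChromeOptions()
options.add_argument("--headless=new")  # use "--headless" on older Chrome

driver = webdriver.Chrome(options=options)
try:
    driver.get("https://example.com/video-page")
    # page_source now contains the DOM *after* JavaScript has run,
    # unlike the raw HTML that requests.get() would return.
    html = driver.page_source
    print(len(html))
finally:
    driver.quit()
```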

To monitor the network traffic the way your browser does, you first need to capture all the URLs the page requests and inspect what comes back (size, content type), which means you have to let the page load and go through its activity first.
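
One way to do that is Chrome's performance log, which Selenium can expose. This is a sketch assuming Selenium 4 with Chrome; the extension filter at the end is only an illustration of how you might spot the stream URL:

```python
import json

from selenium import webdriver

options = webdriver.ChromeOptions()
options.add_argument("--headless=new")
# Ask Chrome to record DevTools network events in its performance log.
options.set_capability("goog:loggingPrefs", {"performance": "ALL"})

driver = webdriver.Chrome(options=options)
try:
    driver.get("https://example.com/video-page")  # placeholder URL
    for entry in driver.get_log("performance"):
        message = json.loads(entry["message"])["message"]
        if message["method"] == "Network.responseReceived":
            url = message["params"]["response"]["url"]
            # Crude filter for likely video/stream responses.
            if url.endswith((".mp4", ".m3u8", ".ts")):
                print(url)
finally:
    driver.quit()
```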

If the video URL appears in full and is not well hidden in the JavaScript, then Regex to find urls in string in Python could be used.
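
If it does show up verbatim, something like the rough pattern below can pull it out of the page source or a script body (the pattern and the sample string are just illustrations):

```python
import re

# Rough pattern for absolute URLs ending in common video/stream extensions.
VIDEO_URL_RE = re.compile(r"https?://[^\s'\"]+\.(?:mp4|m3u8|webm)[^\s'\"]*")

sample = 'player.load("https://cdn.example.com/stream/video.m3u8?token=abc");'
print(VIDEO_URL_RE.findall(sample))
# ['https://cdn.example.com/stream/video.m3u8?token=abc']
```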

You would use requests for actually submitting forms, sending GET and POST requests, and downloading the content of the URL once you have it.
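
Once you have the direct file URL, a streamed download with requests looks roughly like this (the URL is a placeholder, and segmented streams such as HLS playlists need extra handling beyond a single GET):

```python
import requests

video_url = "https://cdn.example.com/stream/video.mp4"  # placeholder

# Stream the response so the whole file is never held in memory at once.
with requests.get(video_url, stream=True, timeout=30) as response:
    response.raise_for_status()
    with open("video.mp4", "wb") as fh:
        for chunk in response.iter_content(chunk_size=1 << 16):
            fh.write(chunk)
```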
