I am currently developing an HTML page that displays a variety of content from around the web, which I plan to gather with a web scraper. Most of the scrapers I have seen use the Cheerio and Request libraries, but all of the tutorials (such as http://www.netinstructions.com/simple-web-scraping-with-node-js-and-javascript/) use Node.js rather than just an HTML file and .js files. I have no interest in using Node.js: since this page will run purely on a PC locally (not hosted or served as a webpage), Node.js would only seem to add complexity, because, at least in my understanding, what Node.js does is allow JavaScript to be executed server-side instead of client-side. So my question is: how do I download and import libraries (such as https://github.com/cheeriojs/cheerio) into my main JavaScript file so that the page can just be opened in a browser?
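To show what I mean by "import", I imagine something like the following, although I don't know whether cheerio actually ships a single browser-ready file; the `libs/cheerio.bundle.js` filename below is purely hypothetical:

```html
<!DOCTYPE html>
<html>
  <head>
    <!-- Hypothetical pre-bundled browser build of cheerio;
         I don't know if such a file actually exists -->
    <script src="libs/cheerio.bundle.js"></script>
    <!-- My own code, which would then use the library's global -->
    <script src="main.js"></script>
  </head>
  <body></body>
</html>
```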
Edit: Even if Node.js is not just for server-side code, my question stands. Browsers run JavaScript, so if I package the libraries I want to use alongside my main .js file and reference them, it should work there without Node.js. I just don't know how to do that properly with, for example, cheerio, which consists of many .js files.

Edit 2: Alternatively, if such libraries can't be used client-side, could someone point me in the right direction or toward a tutorial that would help me make a scraper another way? A rough sketch of what I have in mind is below.
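For instance, would something like this be viable, using only built-in browser APIs (fetch and DOMParser) instead of cheerio? The URL is just a placeholder, and I gather the browser's same-origin policy (CORS) may block requests to most sites:

```javascript
// Rough sketch of a client-side "scraper" using only built-in browser APIs.
// NOTE: the URL below is a placeholder; many real sites will block this
// request under the browser's same-origin policy (CORS).
async function scrapeHeadlines(url) {
  const response = await fetch(url);
  const html = await response.text();

  // DOMParser turns the raw HTML string into a Document we can query,
  // playing roughly the role cheerio plays in the Node.js tutorials.
  const doc = new DOMParser().parseFromString(html, "text/html");

  // As an example, collect the text of every <h2> on the page.
  return Array.from(doc.querySelectorAll("h2")).map(h => h.textContent.trim());
}

scrapeHeadlines("https://example.com")
  .then(headlines => console.log(headlines))
  .catch(err => console.error("Request failed (possibly CORS):", err));
```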