
I'd like to use JavaScript to scrape a webpage and collect all links ending in .jpg into an array, ideally every five minutes. Is this possible in a client-side language like JavaScript? Thanks.

meh9
  • Is there a reason you want to use client-side JS? You can use server-side JS a la node.js. Check out @jeznag's answer below. – jswebb Jul 03 '17 at 03:10
  • I was just hoping to accomplish this without having to download and install anything. – meh9 Jul 03 '17 at 03:21
  • Possible duplicate of [Browser-based client-side scraping](https://stackoverflow.com/questions/31581051/browser-based-client-side-scraping) – shaochuancs Jul 03 '17 at 05:48

1 Answer


Why does it need to be client-side JavaScript? BTW, JS isn't only a client-side language: you can run it server-side with Node.js and a scraping library like osmosis (https://github.com/rchipka/node-osmosis). For example:

```javascript
const osmosis = require('osmosis');
osmosis
  .get('www.craigslist.org/about/sites')
  .find('a[href$=".jpg"]')         // hrefs ending in .jpg (not just containing "jpg")
  .set({ href: '@href' })          // extract each link's href attribute
  .data(d => console.log(d.href));
```
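Since the question asks for a purely client-side approach with a five-minute refresh, here is a minimal browser sketch that needs no installation (it only works on a page you already have open, and the `jpgLinks` helper name is my own):

```javascript
// Hypothetical helper: keep only hrefs ending in ".jpg" (case-insensitive)
function jpgLinks(hrefs) {
  return hrefs.filter(href => href.toLowerCase().endsWith('.jpg'));
}

// In a browser console, rescan the current page every five minutes:
// setInterval(() => {
//   const hrefs = Array.from(document.querySelectorAll('a[href]'), a => a.href);
//   console.log(jpgLinks(hrefs));
// }, 5 * 60 * 1000);

console.log(jpgLinks(['photo.jpg', 'page.html', 'scan.JPG']));
// → [ 'photo.jpg', 'scan.JPG' ]
```

Note that a client-side script can only see links on pages it is allowed to load; fetching arbitrary third-party pages from the browser is blocked by the same-origin policy, which is why a Node.js scraper is usually the more practical route.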
jeznag