Questions tagged [web-crawler]

A Web crawler (also known as Web spider) is a computer program that browses the World Wide Web in a methodical, automated manner or in an orderly fashion. Other terms for Web crawlers are ants, automatic indexers, bots, Web spiders, Web robots, or – especially in the FOAF community – Web scutters.

This process is called Web crawling or spidering. Many sites, in particular search engines, use spidering as a means of providing up-to-date data. Web crawlers are mainly used to create a copy of all the visited pages for later processing by a search engine, which indexes the downloaded pages to provide fast searches. Crawlers can also be used to automate maintenance tasks on a Web site, such as checking links or validating HTML code, and to gather specific types of information from Web pages, such as harvesting e-mail addresses (usually for sending spam).

A Web crawler is one type of bot, or software agent. In general, it starts with a list of URLs to visit, called the seeds. As the crawler visits these URLs, it identifies all the hyperlinks in the page and adds them to the list of URLs to visit, called the crawl frontier. URLs from the frontier are recursively visited according to a set of policies.
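A minimal sketch of that seed-and-frontier loop in Python (the requests/BeautifulSoup libraries, the breadth-first order, and the page limit are illustrative choices, not part of the definition):

    # Minimal breadth-first crawler: seeds go into a frontier, discovered links grow it.
    from collections import deque
    from urllib.parse import urljoin

    import requests
    from bs4 import BeautifulSoup

    def crawl(seeds, max_pages=50):
        frontier = deque(seeds)      # URLs still to visit (the crawl frontier)
        visited = set()              # URLs already fetched
        while frontier and len(visited) < max_pages:
            url = frontier.popleft()
            if url in visited:
                continue
            try:
                response = requests.get(url, timeout=10)
            except requests.RequestException:
                continue
            visited.add(url)
            soup = BeautifulSoup(response.text, "html.parser")
            for anchor in soup.find_all("a", href=True):
                link = urljoin(url, anchor["href"])    # resolve relative links
                if link.startswith("http") and link not in visited:
                    frontier.append(link)              # add to the frontier
        return visited

    # crawl(["https://example.com/"])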

The large volume of the Web implies that the crawler can only download a limited number of pages within a given time, so it needs to prioritize its downloads. The high rate of change implies that by the time the crawler visits a page, it may already have been updated or even deleted.

The number of possible crawlable URLs being generated by server-side software has also made it difficult for web crawlers to avoid retrieving duplicate content. Endless combinations of HTTP GET (URL-based) parameters exist, of which only a small selection will actually return unique content. For example, a simple online photo gallery may offer three options to users, as specified through HTTP GET parameters in the URL. If there exist four ways to sort images, three choices of thumbnail size, two file formats, and an option to disable user-provided content, then the same set of content can be accessed with 48 different URLs, all of which may be linked on the site. This mathematical combination creates a problem for crawlers, as they must sort through endless combinations of relatively minor scripted changes in order to retrieve unique content.
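One common mitigation is to normalize URLs before adding them to the frontier: drop parameters known to affect only presentation and sort the rest, so equivalent URLs collapse to a single key. A rough sketch (the parameter names treated as irrelevant are assumptions for the gallery example above):

    # Canonicalize URLs so presentation-only GET parameters do not yield 48 "different" pages.
    from urllib.parse import urlparse, urlunparse, urlencode, parse_qsl

    IGNORED_PARAMS = {"sort", "thumb", "format", "show_user_content"}   # assumed names

    def canonical_url(url):
        parts = urlparse(url)
        # Keep only parameters that select different content, in a stable order.
        query = sorted((k, v) for k, v in parse_qsl(parts.query) if k not in IGNORED_PARAMS)
        return urlunparse(parts._replace(query=urlencode(query), fragment=""))

    # canonical_url("http://gallery.example/?thumb=large&sort=date&album=2")
    # -> "http://gallery.example/?album=2"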

9683 questions
338 votes, 3 answers

Sending "User-agent" using Requests library in Python

I want to send a value for "User-agent" while requesting a webpage using Python Requests. I am not sure if it is okay to send this as part of the header, as in the code below: debug = {'verbose': sys.stderr} user_agent = {'User-agent':…
user1289853
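A note on the question above: passing a dict via the headers= keyword is the documented way to set User-agent in Requests. A minimal sketch (the UA string is a placeholder; httpbin.org simply echoes the header back):

    import requests

    # Custom headers go in the `headers` keyword argument of requests.get/post.
    headers = {"User-agent": "my-crawler/0.1 (+https://example.com/bot)"}
    response = requests.get("https://httpbin.org/user-agent", headers=headers)
    print(response.json())   # shows the User-agent the server received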
241 votes, 5 answers

How to request Google to re-crawl my website?

Does someone know a way to request Google to re-crawl a website? If possible, this shouldn't take months. My site is showing an old title in Google's search results. How can I show it with the correct title and description?
Manish Shrivastava
225 votes, 12 answers

Finding the layers and layer sizes for each Docker image

For research purposes I'm trying to crawl the public Docker registry ( https://registry.hub.docker.com/ ) and find out 1) how many layers an average image has and 2) the sizes of these layers to get an idea of the distribution. However I studied the…
user134589
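For the Docker registry question above, one possible route is the Registry HTTP API v2. The sketch below assumes Docker Hub's anonymous token endpoint and the v2 manifest schema, and uses library/ubuntu purely as an example repository:

    import requests

    repo, tag = "library/ubuntu", "latest"   # example repository

    # Docker Hub requires a (free, anonymous) bearer token per repository.
    token = requests.get(
        "https://auth.docker.io/token",
        params={"service": "registry.docker.io", "scope": f"repository:{repo}:pull"},
    ).json()["token"]

    # The v2 manifest lists each layer with its digest and compressed size.
    manifest = requests.get(
        f"https://registry-1.docker.io/v2/{repo}/manifests/{tag}",
        headers={
            "Authorization": f"Bearer {token}",
            "Accept": "application/vnd.docker.distribution.manifest.v2+json",
        },
    ).json()

    layers = manifest.get("layers", [])
    print(f"{repo}:{tag}: {len(layers)} layers")
    for layer in layers:
        print(layer["digest"], layer["size"], "bytes")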
171 votes, 4 answers

keep rsync from removing unfinished source files

I have two machines, speed and mass. speed has a fast Internet connection and is running a crawler which downloads a lot of files to disk. mass has a lot of disk space. I want to move the files from speed to mass after they're done downloading.…
aaronsw
171 votes, 3 answers

TypeError: can't use a string pattern on a bytes-like object in re.findall()

I am trying to learn how to automatically fetch URLs from a page. In the following code I am trying to get the title of the webpage: import urllib.request import re url = "http://www.google.com" regex = r'<title>(,+?)</title>' pattern =…
Inspired_Blue
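For the TypeError above: urllib returns bytes, so the response must be decoded (or a bytes pattern used) before calling re.findall with a str pattern. A minimal sketch of the decoding route (the title-matching regex is an assumption about what the original code intended):

    import re
    import urllib.request

    url = "http://www.google.com"
    raw = urllib.request.urlopen(url).read()       # bytes, not str
    html = raw.decode("utf-8", errors="replace")   # decode before using a str pattern
    print(re.findall(r"<title>(.+?)</title>", html))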
161 votes, 9 answers

Difference between BeautifulSoup and Scrapy crawler?

I want to make a website that shows a comparison of Amazon and eBay product prices. Which of these will work better, and why? I am somewhat familiar with BeautifulSoup but not so much with the Scrapy crawler.
Nishant Bhakta
144 votes, 19 answers

How to detect search engine bots with PHP?

How can one detect search engine bots using PHP?
terrific
135 votes, 5 answers

How to find all links / pages on a website

Is it possible to find all the pages and links on ANY given website? I'd like to enter a URL and produce a directory tree of all links from that site. I've looked at HTTrack, but that downloads the whole site and I simply need the directory tree.
Jonathan Lyon
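For the question above, if a programmatic answer is acceptable, a small same-host crawler that records URLs without mirroring any content is usually enough. A sketch assuming requests and BeautifulSoup (the page limit is arbitrary):

    # Collect all same-site links reachable from a start URL; nothing is saved to disk.
    from collections import deque
    from urllib.parse import urljoin, urlparse

    import requests
    from bs4 import BeautifulSoup

    def site_links(start_url, limit=200):
        host = urlparse(start_url).netloc
        queue, seen = deque([start_url]), {start_url}
        while queue and len(seen) < limit:
            page = queue.popleft()
            try:
                html = requests.get(page, timeout=10).text
            except requests.RequestException:
                continue
            for a in BeautifulSoup(html, "html.parser").find_all("a", href=True):
                link = urljoin(page, a["href"]).split("#")[0]
                if urlparse(link).netloc == host and link not in seen:
                    seen.add(link)
                    queue.append(link)
        return sorted(seen)

    # for url in site_links("https://example.com/"): print(url)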
125 votes, 5 answers

How to pass a user defined argument in scrapy spider

I am trying to pass a user-defined argument to a Scrapy spider. Can anyone suggest how to do that? I read about a -a parameter somewhere but have no idea how to use it.
L Lawliet
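For the Scrapy question above: -a name=value pairs are passed as keyword arguments to the spider's __init__, which is Scrapy's documented mechanism for user-defined arguments. Spider and argument names below are placeholders:

    import scrapy

    class QuotesSpider(scrapy.Spider):
        name = "quotes"

        def __init__(self, category=None, *args, **kwargs):
            super().__init__(*args, **kwargs)
            # `scrapy crawl quotes -a category=books` arrives here as category="books"
            self.start_urls = [f"https://example.com/{category or 'all'}"]

        def parse(self, response):
            yield {"url": response.url}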
118 votes, 8 answers

Get a list of URLs from a site

I'm deploying a replacement site for a client but they don't want all their old pages to end in 404s. Keeping the old URL structure wasn't possible because it was hideous. So I'm writing a 404 handler that should look for an old page being requested…
Oli
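For the 404-handler question above, one workable approach is to collect the old site's URLs (with a crawler like the ones sketched earlier, or from its sitemap) and fuzzy-match each 404'd path against that list; difflib from the standard library is enough for a rough cut. All URLs below are made up:

    import difflib

    old_urls = ["/products/widgets.html", "/about-us.php", "/contact.php"]
    redirects = {"/products/widgets.html": "/shop/widgets",
                 "/about-us.php": "/about",
                 "/contact.php": "/contact"}

    def suggest_redirect(requested_path):
        # Closest old URL above the similarity cutoff, if any.
        match = difflib.get_close_matches(requested_path, old_urls, n=1, cutoff=0.6)
        return redirects[match[0]] if match else None

    # suggest_redirect("/product/widget.htm")  ->  "/shop/widgets"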
112 votes, 11 answers

Detecting 'stealth' web-crawlers

What options are there to detect web-crawlers that do not want to be detected? (I know that listing detection techniques will allow the smart stealth-crawler programmer to make a better spider, but I do not think that we will ever be able to block…
Jacco
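For the stealth-crawler question above, most detection signals are behavioural rather than header-based: request rate, ignoring robots.txt, fetching HTML but never CSS/JS/images, following hidden trap links. A toy sketch of just the rate signal (window and threshold are arbitrary):

    import time
    from collections import defaultdict, deque

    WINDOW_SECONDS = 10           # sliding window length
    MAX_REQUESTS = 20             # requests allowed per window
    recent = defaultdict(deque)   # ip -> timestamps of recent requests

    def looks_like_bot(ip, now=None):
        now = now or time.time()
        hits = recent[ip]
        hits.append(now)
        while hits and now - hits[0] > WINDOW_SECONDS:
            hits.popleft()        # drop requests outside the window
        return len(hits) > MAX_REQUESTS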
109 votes, 12 answers

Hide Email Address from Bots - Keep mailto:

tl;dr Hide email address from bots without using scripts and maintain mailto: functionality. Method must also support screen-readers. Summary Email obfuscation without using scripts or contact forms Email address needs to be completely visible to…
user7234396
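For the email-obfuscation question above, a common script-free compromise is to HTML-entity-encode the address in both the href and the visible text: mailto: and screen readers keep working, and only the most naive harvesters are deterred. A small generator sketch (the address is a placeholder):

    # Encode each character of an email address as a decimal HTML entity.
    def entity_encode(text):
        return "".join(f"&#{ord(c)};" for c in text)

    email = "user@example.com"   # placeholder
    print(f'<a href="mailto:{entity_encode(email)}">{entity_encode(email)}</a>')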
102 votes, 6 answers

What is the difference between web-crawling and web-scraping?

Is there a difference between Crawling and Web-scraping? If there's a difference, what's the best method to use in order to collect some web data to supply a database for later use in a customised search engine?
wassimans
101 votes, 11 answers

How can I use different pipelines for different spiders in a single Scrapy project

I have a Scrapy project which contains multiple spiders. Is there any way I can define which pipelines to use for which spider? Not all the pipelines I have defined are applicable to every spider.
CodeMonkeyB
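For the pipelines question above, per-spider settings via the custom_settings class attribute are the usual way to switch ITEM_PIPELINES per spider. Class and pipeline paths below are placeholders:

    import scrapy

    class BooksSpider(scrapy.Spider):
        name = "books"
        # Overrides the project-wide ITEM_PIPELINES for this spider only.
        custom_settings = {
            "ITEM_PIPELINES": {"myproject.pipelines.BookPipeline": 300},
        }

    class AuthorsSpider(scrapy.Spider):
        name = "authors"
        custom_settings = {
            "ITEM_PIPELINES": {"myproject.pipelines.AuthorPipeline": 300},
        }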
86 votes, 8 answers

How to run Scrapy from within a Python script

I'm new to Scrapy and I'm looking for a way to run it from a Python script. I found 2 sources that explain this: http://tryolabs.com/Blog/2011/09/27/calling-scrapy-python-script/ http://snipplr.com/view/67006/using-scrapy-from-a-script/ I can't…
user47954
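For the run-from-a-script question above, scrapy.crawler.CrawlerProcess is the commonly used entry point. A minimal sketch (spider name and URL are placeholders):

    import scrapy
    from scrapy.crawler import CrawlerProcess

    class TitleSpider(scrapy.Spider):
        name = "titles"
        start_urls = ["https://example.com/"]

        def parse(self, response):
            yield {"title": response.css("title::text").get()}

    process = CrawlerProcess(settings={"LOG_LEVEL": "INFO"})
    process.crawl(TitleSpider)
    process.start()   # blocks until the crawl finishes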