41

Do you know of any large dataset, free or low cost, that I could use to experiment with Hadoop? Any related pointers/links are appreciated.

Preferences:

  • At least one GB of data.

  • Production web server log data.

A few that I have found so far:

  1. Wikipedia dump

  2. http://wiki.freebase.com/wiki/Data_dumps

  3. http://aws.amazon.com/publicdatasets/

Also, can we run our own crawler to gather data from sites such as Wikipedia? Any pointers on how to do this are appreciated as well.

Sundar
  • datanami recently posted this list of links: http://www.datanami.com/2015/01/29/9-places-get-big-data-now/ - perhaps someone has time to convert this to a proper answer. – Nickolay Feb 02 '15 at 23:03

4 Answers

11

A few points about your question regarding crawling and Wikipedia.

You have linked to the Wikipedia data dumps; you can use the Cloud9 project from UMD to work with this data in Hadoop.

They have a page on this: Working with Wikipedia

Another datasource to add to the list is:

  • ClueWeb09 - one billion web pages collected between January and February 2009; 5 TB compressed.

I would say that using a crawler to generate data should be asked in a separate question from this one about Hadoop/MapReduce.

Binary Nerd
    link "Working with Wikipedia" is dead. is this replacement http://lintool.github.com/Cloud9/docs/content/wikipedia.html ? – f13o Aug 31 '12 at 16:10
  • link for ClueWeb09 is dead. New link seems to be http://lemurproject.org/clueweb09/. It looks you need to pay for the data. – user3282611 Aug 12 '19 at 15:39
10

One obvious source: the Stack Overflow trilogy data dumps. These are freely available under the Creative Commons license.

APC
  • @toddlermenot - the Dumps are now hosted on the Internet Archive. I've updated the link. Read the reason why it changed [on this SE Blog page](https://blog.stackexchange.com/2014/01/stack-exchange-cc-data-now-hosted-by-the-internet-archive/). – APC Aug 09 '15 at 09:42
7

This is a collection of 189 datasets for machine learning (which is one of the nicest applications for Hadoop): http://archive.ics.uci.edu/ml/datasets.html

Peter Wippermann
6

It's not a log file, but maybe you could use the planet file from OpenStreetMap: http://wiki.openstreetmap.org/wiki/Planet.osm

CC licence, about 160 GB (unpacked)

There are also smaller files for each continent: http://wiki.openstreetmap.org/wiki/World

Olvagor