
For my day job, I have been tasked with setting up a computer system to run calculations on a large database of strings. I have established a proof of concept, but I don't have the low-level knowledge to optimize the hardware and software environment. I was hoping for some guidance on this aspect.

Setup:

  • 100,000 records in a database containing strings
  • I will be performing string similarity calculations to look for approximate duplicates
    • i.e. each string against every other string, so ~5 billion calculations
  • I wrote the proof of concept in Ruby, with SQLite3 as the database, against a 1,000-row sample (a simplified sketch of the core loop follows this list)
  • The total job should run in under a few days - the faster the better, but with diminishing returns. This is a one-time pass, so I don't need a supercomputer if a desktop setup can do it within a few days
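
To give an idea of the shape of the work, here is roughly what the proof of concept does, stripped down (table and column names are simplified, and the similarity function shown is just a bigram-overlap placeholder with an arbitrary threshold):

    # Roughly the shape of the proof of concept: pull all strings, then compare
    # each one against every later one (upper triangle, ~5 billion pairs at 100k rows).
    require "sqlite3"

    db = SQLite3::Database.new("strings.db")
    strings = db.execute("SELECT value FROM records").flatten  # table/column names simplified

    # Placeholder metric: similarity based on shared character bigrams,
    # returning a score between 0.0 and 1.0.
    def similarity(a, b)
      pairs = ->(s) { (0...s.length - 1).map { |i| s[i, 2] }.uniq }
      pa, pb = pairs.call(a), pairs.call(b)
      return 0.0 if pa.empty? || pb.empty?
      2.0 * (pa & pb).size / (pa.size + pb.size)
    end

    matches = []
    strings.each_with_index do |a, i|
      ((i + 1)...strings.size).each do |j|
        matches << [a, strings[j]] if similarity(a, strings[j]) > 0.85  # example threshold
      end
    end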

What I'm Looking For:

  • If I'm building a custom box to run this job (and potentially future jobs of a similar nature), what hardware should I focus on optimizing? I.e. should I spend my limited budget on a very fast GPU? CPU? Large amounts of RAM? I don't know Ruby on a low enough level to know where the bottlenecks for this type of operation are
  • Am I missing a better approach? I won't get approval for any major purchases of software or expensive hardware, at least until I can prove this method works with this run-through. But can anyone suggest a more efficient method of detecting inexact duplicates?
Dave W.
  • How long are the strings? And what are they? Is it just a flat list, or do the strings have other associated properties? – glenn mcdonald Jul 22 '11 at 12:25
  • It's actually multiple datasets. Each dataset contains a flat list of 100k strings of similar length, but each dataset has a different average string length. E.g.: Dataset A = 100k strings of approximately 8-15 chars, Dataset B = 100k strings of approximately 30-50 chars, etc. – Dave W. Jul 22 '11 at 13:18

3 Answers


First off, 100,000 strings don't really qualify as a large dataset nowadays, so don't worry too much about the hardware. Here are some suggestions from my previous job (related to search and machine translation) and my current one, where I deal with several hundred thousand to millions of XML records all the time:

  • You want RAM. Lots of it.
  • As Soren said, you want to make sure your algorithm is good.
  • Choose your DB wisely. Postgres, for example, has excellent string functions, and doing certain things directly in the DB can be very fast. Have I mentioned that you want a lot of RAM?
  • Your job sounds like it would be fairly easy to partition into smaller subtasks which can be tackled in parallel. If that's really the case, you might want to look at MapReduce (a minimal fork-based sketch of this kind of partitioning follows this list). At my previous job we had pretty good workstations (4 cores, 8 GB of RAM) which were never turned off, so we turned some of them into a Hadoop cluster that would do useful stuff. Since the machines were quite overpowered for everyday use anyway, the users didn't even notice. It's usually not that difficult to turn something into a MapReduce job, and the other advantage is that you can keep the setup around for similar tasks in the future.
  • As for Ruby-specific bottlenecks, the biggest one in MRI is usually garbage collection, which, thanks to its stop-the-world nature, is super slow. When we profile, this regularly turns out to be a problem. See why's article The fully upturned bin for details on Ruby GC. If you are set on using Ruby, you might want to compare MRI to JRuby; from my experience with the latter and profilers like JVisualVM, I wouldn't be surprised if JRuby fared better.
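
As a rough illustration of the partitioning idea (a sketch only: it is MRI/Unix-specific because of fork, on JRuby you'd use threads instead, and similarity/load_strings stand in for your own code):

    # Sketch: split the upper-triangle comparisons across a few worker processes.
    # Each worker handles a slice of outer indices and writes its matches to its
    # own file ("map"); afterwards you merge the files ("reduce").
    WORKERS = 4

    def compare_slice(strings, indices, out_path)
      File.open(out_path, "w") do |out|
        indices.each do |i|
          ((i + 1)...strings.size).each do |j|
            score = similarity(strings[i], strings[j])   # your similarity function
            out.puts("#{i}\t#{j}\t#{score}") if score > 0.85
          end
        end
      end
    end

    strings = load_strings                               # however you load the 100k rows
    slices  = (0...strings.size).each_slice((strings.size.to_f / WORKERS).ceil).to_a

    pids = slices.each_with_index.map do |indices, n|
      Process.fork { compare_slice(strings, indices, "matches_#{n}.tsv") }  # MRI/Unix only
    end
    pids.each { |pid| Process.wait(pid) }
    # Note: these slices aren't perfectly balanced (early indices have more pairs);
    # interleaving indices across workers evens that out.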
Michael Kohl

The total job should run in under a few days...
This is a one-time pass...
Am I missing a better approach...

If this is a one-off task, you should really just run it on Amazon: get an Extra Large (4 cores, 15 GB RAM) instance for a few hours and run it there.

Soren
  • I wish, but the data is sensitive, so I doubt my boss will go for a cloud compute option. Good advice though! – Dave W. Aug 02 '11 at 19:16

Your algo for string similarity is much more important than your hardware spec.

The key question for string-similarity algorithms is: when do you expect two strings to be similar? Do you consider substrings, spelling errors, phonetics, typing errors?

This SO link has a great discussion of algorithms. 100,000 records is really very little data (in my world), but for ease of implementation, once you have a good algorithm, you should try to get as much RAM as possible. Doing it in Ruby may not be the best choice from a performance perspective either.
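
For example, an edit-distance metric catches typing errors, while n-gram overlap (like the pair distance) is more forgiving of rearrangements; just to illustrate the kind of choice involved, a hand-rolled Levenshtein sketch looks like this:

    # Sketch: classic dynamic-programming Levenshtein edit distance, counting
    # single-character insertions, deletions and substitutions.
    def levenshtein(a, b)
      prev = (0..b.length).to_a
      a.each_char.with_index(1) do |ca, i|
        curr = [i]
        b.each_char.with_index(1) do |cb, j|
          cost = ca == cb ? 0 : 1
          curr << [curr[j - 1] + 1,          # insertion
                   prev[j] + 1,              # deletion
                   prev[j - 1] + cost].min   # substitution
        end
        prev = curr
      end
      prev.last
    end

    levenshtein("kitten", "sitting")  # => 3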

Soren
  • I agree that my algo is very important (and I'll be doing my best to optimize it), but running a test run on my work laptop (Thinkpad, not exactly a powerhouse) vs. my personal laptop (Macbook Pro Core Duo 2.8 GHz with 8 GB of RAM) showed an estimated run time of 1 month vs. 6 days. So at least from a hardware standpoint, my question was trying to get at which piece of the hardware puzzle is most important? Just CPU and RAM? Does the GPU play a role? – Dave W. Jul 22 '11 at 12:29
  • GPU should not play any role in what you are doing. RAM is more important than CPU: throw as much RAM in there as you can; that will make a factor-of-10 difference if your data is large. CPU will at best make a factor-of-2 difference. If your dataset gets too large to fit in memory, then consider a better disk configuration (faster disks and/or multiple disks with striping, or an SSD). – Soren Jul 22 '11 at 12:57
  • Good to know! That is the kind of info I was looking for! And btw, the link you provided - that is the algo I am using, the pair distance. – Dave W. Jul 22 '11 at 13:21