1

Let's say you are building an analytics program for your site, in which you log the address of a page every time it is visited, so your log.txt could be

x.com/a
x.com/b
x.com/a
x.com/c
x.com/a

There is no counter and no SQL is involved; it's just a log file. Given that the log has thousands of entries and around a thousand unique addresses (x.com/a, x.com/b, ...), what's the most efficient way of going through this list and spitting out the top 10 URLs?

My best solution would be to go through the log file and, if a given address does not yet exist in a hash table, add it as a key; then increment its value. Afterwards, search the hash table for the 10 largest values.
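In Python, for example, that would be roughly the following (a sketch only, assuming the log.txt above):

from collections import defaultdict
import heapq

counts = defaultdict(int)                     # url -> hit count
with open("log.txt") as f:
    for line in f:
        url = line.strip()
        if url:
            counts[url] += 1

# second pass, but only over the (much smaller) set of unique urls
top10 = heapq.nlargest(10, counts.items(), key=lambda kv: kv[1])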

I'm not convinced this is the best solution, partly because of the space complexity (what happens if the number of unique addresses grows from a few thousand to a few million?), and partly because I would need another pass over the hash table to find the largest values.

Saad

3 Answers

2

Even for a few thousand or a few million entries, your approach is just fine: it has a linear average run time (O(n)), so it is not bad at all.

However, you can use a map-reduce approach if you want more scalability.

map(file):
    for each entry in file:
        emitIntermediate(getDomainName(entry), "1")

reduce(domain, list):
    emit(domain, size(list))

The above will efficiently give you the list of (domain, count) tuples, and all you have to do then is select the top 10.
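Simulated on a single machine in plain Python (no Hadoop; get_domain_name is a hypothetical stand-in for getDomainName, here just returning the whole line), the same map/group/reduce shape might look like:

from collections import defaultdict

def get_domain_name(entry):
    # hypothetical stand-in for getDomainName(); here the whole url is the key
    return entry.strip()

def map_file(lines):
    # the "map" step: emit one (domain, 1) pair per log entry
    for entry in lines:
        if entry.strip():
            yield (get_domain_name(entry), 1)

def reduce_pairs(pairs):
    # the framework's shuffle/group step, then "reduce": count each group
    groups = defaultdict(list)
    for domain, one in pairs:
        groups[domain].append(one)
    return [(domain, len(ones)) for domain, ones in groups.items()]

with open("log.txt") as f:
    domain_counts = reduce_pairs(map_file(f))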

Selecting the top 10 can be done with a map-reduce (distributed) sort for scalability, or with a min heap (iterate while maintaining the top 10 elements encountered so far in the heap). The second approach is explained in more detail in this thread.
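A sketch of the min-heap option in Python (domain_counts is assumed to be the iterable of (domain, count) tuples produced above):

import heapq

def top_k(domain_counts, k=10):
    # min heap holding the k largest (count, domain) pairs seen so far
    heap = []
    for domain, count in domain_counts:
        if len(heap) < k:
            heapq.heappush(heap, (count, domain))
        elif count > heap[0][0]:
            heapq.heapreplace(heap, (count, domain))
    return sorted(heap, reverse=True)

This keeps memory at O(k) no matter how many (domain, count) pairs stream through.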


About space complexity: if you are using a 64-bit system, you can rely on virtual memory as if it were RAM and let the OS do what it can (swapping pages to disk when needed); it is very unlikely that you will need more than the amount of virtual memory available on a 64-bit machine. An alternative is to use a hash table (or a B+ tree) optimized for the file system and run the same algorithm on it.
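For illustration, here is a rough version of that file-backed alternative using Python's dbm module as the on-disk table (a simple key-value store, not a B+ tree, but the algorithm is the same; the file name url_counts is made up):

import dbm
import heapq

# file-backed key-value store instead of an in-memory hash table
with dbm.open("url_counts", "c") as db:
    with open("log.txt") as f:
        for line in f:
            url = line.strip()
            if url:
                db[url] = str(int(db[url]) + 1 if url in db else 1)
    # keys come back as bytes; counts were stored as decimal strings
    top10 = heapq.nlargest(10, ((int(db[key]), key) for key in db.keys()))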


However, if the data really does not fit in RAM and you cannot use map-reduce, I suspect that sorting and iterating, although O(n log n), will be more efficient (using an external sort), because the number of disk accesses will be minimized, and disk access is much slower than RAM access.
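Assuming the log has already been sorted on disk (by an external sort such as GNU sort), a single sequential pass is then enough; a sketch in Python:

import heapq
from itertools import groupby

def top10_from_sorted(path, k=10):
    # one sequential pass over a file whose lines are already sorted
    heap = []                                  # min heap of (count, domain)
    with open(path) as f:
        stripped = (line.strip() for line in f)
        for domain, run in groupby(line for line in stripped if line):
            count = sum(1 for _ in run)        # length of the run of equal lines
            if len(heap) < k:
                heapq.heappush(heap, (count, domain))
            elif count > heap[0][0]:
                heapq.heapreplace(heap, (count, domain))
    return sorted(heap, reverse=True)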

amit
1

I'd suggest that inspecting the file each time is the wrong way to go. A better solution might be to parse the file and push the data into a database (RavenDB, or another NoSQL store, should be the easiest). Once there, querying the data becomes trivial, even with very large amounts of data.
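For illustration only, here is the same idea with SQLite from Python's standard library as a stand-in for RavenDB (the table and column names are made up):

import sqlite3

conn = sqlite3.connect("hits.db")
conn.execute("CREATE TABLE IF NOT EXISTS hits (url TEXT)")
with open("log.txt") as f:
    conn.executemany("INSERT INTO hits (url) VALUES (?)",
                     ((line.strip(),) for line in f if line.strip()))
conn.commit()

top10 = conn.execute(
    "SELECT url, COUNT(*) AS n FROM hits GROUP BY url ORDER BY n DESC LIMIT 10"
).fetchall()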

ilivewithian
1

Don't reinvent the wheel. Coreutils' sort and uniq can process your log file:

sort log.txt | uniq -c | sort -n -r

Coreutils are available on *nix systems and have been ported to Windows.

If you do need to roll this processing into your own code, consult your language's libraries for its version of a multiset. Python's, for example, is the Counter class, which will happily tell you the most_common([n]).
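For example, with the log.txt from the question:

from collections import Counter

with open("log.txt") as f:
    counts = Counter(line.strip() for line in f if line.strip())

print(counts.most_common(10))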

A. Webb