
I'm working on implementing a histogram, and one of the key points is quick merging of histogram bins. Because I don't have a priori knowledge of the data set being approximated, I need a way to quickly merge neighboring bins once I've exceeded the maximum number of bins.

So, as an example, if you're approximating a data stream 23, 19, 10, 16, 36, 2, 9, 32, 30, 45 with five histogram bins, you'd read in the first five elements, obtaining:

(23,1), (19,1), (10,1), (16,1), (36,1)

Adding the bin (2,1) causes an issue, since we've exceeded the maximum number of bins. So, we add (2,1) and merge the two closest bins -- (16,1) and (19,1) -- to get a new bin (17.5,2) that replaces those two.

Repeating this method for the rest of the histogram gives us the final output:

(2,1), (9.5,2), (19.33,3), (32.67,3), (45,1).

Implementing this without regard for complexity is trivial. However, I'm really concerned about optimizing it for large data sets, because my "trivial" implementation ends up taking 15 seconds to run on a stream of 100,000 Gaussian-distributed values.
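For reference, the trivial version is essentially the following (a sketch, not my exact code; it uses the HistogramBin struct shown below):

#include <algorithm>
#include <cstddef>
#include <vector>

// Naive approach: keep the bins in a sorted vector, and whenever an
// insertion exceeds maxBins, linearly scan for the closest pair of
// neighbors and merge them. The O(m) scan per value is what makes
// this slow on large streams.
void addValue(std::vector<HistogramBin> &bins, double x, std::size_t maxBins)
{
    HistogramBin b(x);
    bins.insert(std::lower_bound(bins.begin(), bins.end(), b), b);
    if (bins.size() <= maxBins)
        return;

    std::size_t best = 0;  // index of the left bin of the closest pair
    for (std::size_t i = 1; i + 1 < bins.size(); ++i)
        if (bins[i].getDifference(bins[i + 1]) <
            bins[best].getDifference(bins[best + 1]))
            best = i;

    bins[best].merge(bins[best + 1]);
    bins.erase(bins.begin() + best + 1);
}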

My current thought is to use boost::multi_index to keep track of my HistogramBin struct, which is defined as:

#include <cmath>  // std::fabs

struct HistogramBin
{
    double bin;          // bin center (weighted mean of merged values)
    unsigned long count; // number of values merged into this bin
    bool isNull;

    HistogramBin(double x, bool n = false)
    : bin(x), count(1), isNull(n) {}

    bool operator<(const HistogramBin &other) const
    { return (bin < other.bin); }

    // Merges other with this histogram bin
    // E.g., if you have (2.0,1) and (3.0,2), you'd merge them into (2.67,3)
    void merge(const HistogramBin &other)
    {
        unsigned long old_count = count;
        count += other.count;
        bin = (bin*old_count + other.bin*other.count)/count;
    }

    // Gets the absolute difference between two histogram bins
    // (std::fabs, not plain abs, which would truncate to int)
    double getDifference(const HistogramBin &other) const
    { return std::fabs(bin - other.bin); }
};

So, the multi_index would use ordered_unique<> to sort on HistogramBin::bin.
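Concretely, I'm picturing something like this (just a sketch; BinContainer is an illustrative name):

#include <boost/multi_index_container.hpp>
#include <boost/multi_index/ordered_index.hpp>
#include <boost/multi_index/member.hpp>

// Bins kept ordered (and unique) by their center value.
typedef boost::multi_index::multi_index_container<
    HistogramBin,
    boost::multi_index::indexed_by<
        boost::multi_index::ordered_unique<
            boost::multi_index::member<HistogramBin, double, &HistogramBin::bin>
        >
    >
> BinContainer;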

Now, this doesn't resolve the issue of sorting the bins by the differences between neighboring bins. Indexing by HistogramBin::bin gives us an ordered list of HistogramBin objects, but then the next step is to calculate the difference between the current bin and the next one, and then to sort on those values as well.

Is there any way to sort on those values, while maintaining the integrity of the list, and without introducing a new container (such as a multimap of difference/iterator key/value pairs)?

Maintaining this list is my current idea for a near-optimal solution to the complexity problem, because it only needs to be updated when there's a merge, and a merge only happens when a new value is added.
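For illustration, the auxiliary container I'd like to avoid would be something along these lines (names are just illustrative):

#include <map>
#include <set>

typedef std::set<HistogramBin> BinSet;

// Keys are the differences between neighboring bins; values point at
// the left-hand bin of each pair. The closest pair is always
// diffs.begin(), but every insert or merge means erasing and
// re-inserting the entries for the affected neighbors to keep the
// two containers in sync.
typedef std::multimap<double, BinSet::iterator> DiffMap;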

Any thoughts or insight would be appreciated.

kmore
  • What properties should your bins have? I think it could be possible to sort the incoming values and then set the limits of the bins according to your criteria. – Nobody moving away from SE Jul 27 '11 at 17:12
  • It is not clear what it means to sort a container by the difference between neighbouring elements. Perhaps an example would clarify it. – n. m. could be an AI Jul 27 '11 at 17:12
  • @n.m. I tried to give the example in the question. By sorting by the difference between neighboring elements, I mean that post-sorting, the "sorted difference map" would resemble: (6, (10,1), (16,1)), (3, (16,1), (19,1)), ..., and that's the vector you'd want to sort--on the differences. That's what tells us to merge (16,1) and (19,1) into (17.5,2). – kmore Jul 27 '11 at 17:18
  • @Nobody the bins' properties are limitless. They're non-uniform in width (meaning they can take on any value that a double can take), and each bin can hold up to the limit of an unsigned long. – kmore Jul 27 '11 at 17:22
  • Take a look at the answer to this question: http://stackoverflow.com/questions/3698532/online-k-means-clustering If I understand your problem correctly, that's pretty much what you're looking for, where the initial `k` guesses are your first `k` values. – Pablo Jul 27 '11 at 17:23
  • Is what you are describing a histogram? I'm used to a histogram describing an interval broken up into equal sized sub-intervals, where the height is the count of points in that sub-interval. 0-100 in 100 bins equals 100 bins of size "1". 50 bins, would be bins of size "2". – Brett Stottlemyer Jul 27 '11 at 17:26
  • @Pablo- Post that as an answer! I'd upvote it. :-) – templatetypedef Jul 27 '11 at 17:40
  • @Pablo thanks for the link, but that algorithm suffers from the same problems as posted here. True, I could probably search for an implementation, but I'm not looking for an O(m*n) solution, where m is the number of bins and n is the amount of data. There, you're still iterating through all the guesses to determine whether x is closest to that guess. – kmore Jul 27 '11 at 17:40
  • @Brett histograms can have uniform- or non-uniform-sized bins. It's true, non-uniform-sized bins present additional challenges in their use, but I'm aware of them. – kmore Jul 27 '11 at 17:42
  • @kmore It's actually O(n*log(m)): if you keep your bin centers ordered, you can binary search into them, and the closest value is either the one before or the one after it. – Pablo Jul 27 '11 at 17:59

2 Answers


The main problem I see is that you have created a system where you are constantly recalculating the histogram, in the worst case for every new element.

What about something like this:

  1. For N bins, Binmin to Binmax, assign them all to the initial value of your input
  2. For each new number X, if X < Binmin, set Binmin = X; else if X > Binmax, set Binmax = X
  3. If you changed the boundaries in step 2, reset each bin's value so that BinL = Binmin + (Binmax - Binmin) / N * L, where L is the bin ordinal
  4. Add X to the bin whose value is closest to X.

This is back-of-the-napkin, so I'm sure there is a mistake somewhere. The idea is to only 'refactor' the histogram when a value falls outside of it, so in the normal case all you need to do is add X to the bin that most closely matches it. I believe this should result in a very similar histogram, if not an equivalent one. Step 1 is your initialization and steps 2-4 are a loop, if that's not clear. A rough sketch is below.
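Very roughly, in code (untested; the struct and all names are made up):

#include <cmath>
#include <cstddef>
#include <vector>

struct FixedBins
{
    std::vector<double> centers;        // each bin's representative value
    std::vector<unsigned long> counts;  // points assigned to each bin
    double lo, hi;                      // current Binmin / Binmax
    bool seeded;

    FixedBins(std::size_t n) : centers(n), counts(n, 0), lo(0), hi(0), seeded(false) {}

    void add(double x)
    {
        if (!seeded) {                  // step 1: seed every bin with the first value
            centers.assign(centers.size(), x);
            lo = hi = x;
            seeded = true;
        }
        if (x < lo || x > hi) {         // step 2: widen the range
            if (x < lo) lo = x;
            if (x > hi) hi = x;
            // step 3: respace the bins (note: existing counts are not
            // redistributed here, which is part of the napkin-level hand-waving)
            for (std::size_t i = 0; i < centers.size(); ++i)
                centers[i] = lo + (hi - lo) / centers.size() * i;
        }
        std::size_t best = 0;           // step 4: the closest bin gets the point
        for (std::size_t i = 1; i < centers.size(); ++i)
            if (std::fabs(centers[i] - x) < std::fabs(centers[best] - x))
                best = i;
        ++counts[best];
    }
};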

Josh
  • If I'm reading correctly, this requires a priori knowledge of the nature of the data being read into the histogram--something that's impossible with online histograms. The idea is to approximate the underlying distribution of data, without knowing its bounds or range. – kmore Jul 27 '11 at 17:46
  • No, it does not require a priori knowledge. I think you are referring to step 1. You can pick the same number for all values if you wish, like 0. You could also make an educated guess. The only time you would run into a problem would be if you picked values out of bounds of the actual minimum and maximum - this would lead to a sub-optimal representation of the data. The other problem I can see with both my algorithm and yours is that outliers will cause the same issue. – Josh Jul 27 '11 at 17:51
  • Right, so how would you know if you're picking values out of bounds of the actual minimum and maximum? Here's an example of how my algorithm approximates a normal distribution of 100,000 elements (mean=100, std dev=10): http://i.imgur.com/S96VB.png, so outliers are still handled appropriately – kmore Jul 27 '11 at 17:54
  • Well, easy solution, come to think of it. Set all bins to the initial value. – Josh Jul 27 '11 at 17:56

Posting this as an answer:

Take a look at the answer to this question: Online k-means clustering. If I understand your problem correctly, that's pretty much what you're looking for, where the initial k guesses are your first k values.

If you keep your bin centers ordered, you can binary search into the list; the closest value is either the one before or the one after the insertion point. That gives an overall complexity of O(n*log(m)), where m is the number of bins and n is the amount of data.
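For example, something along these lines (illustrative; assumes the centers live in a sorted std::vector<double>):

#include <algorithm>
#include <cmath>
#include <cstddef>
#include <vector>

// Find the index of the bin center closest to x in O(log m),
// given that 'centers' is kept sorted.
std::size_t closestBin(const std::vector<double> &centers, double x)
{
    std::vector<double>::const_iterator it =
        std::lower_bound(centers.begin(), centers.end(), x);
    if (it == centers.begin()) return 0;
    if (it == centers.end()) return centers.size() - 1;
    std::size_t i = it - centers.begin();
    // The closest center is either *it or its left neighbor.
    return std::fabs(centers[i] - x) < std::fabs(centers[i - 1] - x) ? i : i - 1;
}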

Pablo