
I have an array of floats like this:

[1.91, 2.87, 3.61, 10.91, 11.91, 12.82, 100.73, 100.71, 101.89, 200]

Now, I want to partition the array like this:

[[1.91, 2.87, 3.61] , [10.91, 11.91, 12.82] , [100.73, 100.71, 101.89] , [200]]

// [200] will be considered as an outlier because of less cluster support

I have to find this kind of segmentation for several arrays, and I don't know what the partition size should be. I tried hierarchical (agglomerative) clustering, and it gives me satisfactory results. However, the issue is that I was advised not to use clustering algorithms for one-dimensional problems, since there is no theoretical justification for doing so (as there is for multidimensional data).

I have spent a lot of time looking for a solution, but the suggestions seem quite different, like: this and this vs. this and this and this.

I found another suggestion besides clustering, i.e. natural breaks optimization. However, it also requires declaring the number of partitions, like k-means (right?).

It is quite confusing (especially because I have to perform this kind of segmentation on several arrays, and it is impossible to know the optimal number of partitions).

Are there any ways to find partitions (minimizing the variance within partitions and maximizing the variance between them) that have some theoretical justification?

Any pointers to articles/papers (with C/C++/Java implementations, if available) that have some theoretical justification would be very helpful.

alessandro
  • I am curious as to why clustering does not fit one-dimensional data - what if you somehow increase the dimensionality, e.g., add sqrt(n) as a dimension, a bit like what happens in SVMs? – zw324 Jul 05 '13 at 01:53
  • @ZiyaoWei, "why clustering does not fit for one dimensional data" - truly, I don't know. I was told in class that it is crazy to use clustering on 1-D data, but I found no article stating why I can't (or can). – alessandro Jul 05 '13 at 01:56
  • @ZiyaoWei increasing the dimensionality without reason does not seem like a good solution. – alessandro Jul 05 '13 at 01:57
  • No, it is not; I'm just thinking that there's no real difference between one-dimensional and multi-dimensional data. Or is there? – zw324 Jul 05 '13 at 01:58
  • *"...reduce the variance within partitions and maximize the variance between partitions..."* If you tell us exactly what you mean by that, maybe we can help. Do you mean minimize ((average variance within a partition) - (average variance between partitions)), or what? – Beta Jul 05 '13 at 01:59
  • @Beta, "minimize each class’s average deviation from the class mean, while maximizing each class’s deviation from the means of the other groups" as in natural breaks optimization (http://en.wikipedia.org/wiki/Jenks_natural_breaks_optimization#cite_ref-Jenks_1-1). – alessandro Jul 05 '13 at 02:02
  • @ZiyaoWei one dimensional data is ordered. 2 dimensional data is not. There *is* an essential difference, with an enormous impact on algorithmic complexity. – Has QUIT--Anony-Mousse Jul 05 '13 at 07:44
  • @Anony-Mousse 2-dimensional data could definitely be ordered if you want - e.g., Y axis first, then X. But perhaps you are right. – zw324 Jul 05 '13 at 14:50

2 Answers


I think I'd sort the data (if it isn't already), then take adjacent differences. Divide each difference by one of the two numbers it separates (the code below divides by the current, larger element) to get a percentage change. Set a threshold, and when the change exceeds that threshold, start a new "cluster".

Edit: Quick demo code in C++:

#include <iostream>
#include <vector>
#include <algorithm>
#include <iterator>
#include <numeric>
#include <functional>

int main() {
    std::vector<double> data{ 
        1.91, 2.87, 3.61, 10.91, 11.91, 12.82, 100.73, 100.71, 101.89, 200 
    };

    // sort the input data
    std::sort(data.begin(), data.end());

    // find the difference between each number and its predecessor
    std::vector<double> diffs;
    std::adjacent_difference(data.begin(), data.end(), std::back_inserter(diffs));

    // convert differences to percentage changes by dividing each
    // difference by the current element (note: diffs[0] is data[0]
    // itself, so the first ratio is always 1 and exceeds the threshold)
    std::transform(diffs.begin(), diffs.end(), data.begin(), diffs.begin(),
        std::divides<double>());

    // print out the results
    for (std::size_t i = 0; i < data.size(); i++) {

        // if a difference exceeds 40%, start a new group:
        if (diffs[i] > 0.4)
            std::cout << "\n";

        // print out an item:
        std::cout << data[i] << "\t";
    }

    return 0;
}

Result:

1.91    2.87    3.61
10.91   11.91   12.82
100.71  100.73  101.89
200
Jerry Coffin
  • Can you kindly elaborate on this? I can't get it (maybe in pseudocode, if possible)? – alessandro Jul 05 '13 at 02:07
  • I tried with larger sample. Looks like it doesn't work [78, 89, 74, 42, 89, 22, 48, 26, 28, 92, 100, 96, 35, 5, 70, 76, 11, 70, 12, 91, 7, 38, 19, 68, 58, 2, 89, 20, 30, 81, 95, 11, 97, 81, 86, 43, 52, 48, 71, 91, 4, 64, 94, 41, 82, 16, 35, 13, 57, 50] – deep_rugs Jul 19 '19 at 22:30
  • @deep_rugs: I think you've misunderstood the intent. When your data is sorted, there's only one break because there's no place in your data where there's a change of greater than 40% between one number and the next. If you care about changes with the data in its original order, remove the `std::sort` line and change `if (diffs[i] > 0.4)` to `if (std::abs(diffs[i]) > 0.4)`. – Jerry Coffin Jul 20 '19 at 07:20

Clustering usually assumes multidimensional data.

If you have one-dimensional data, sort it, then use either kernel density estimation or just scan for the largest gaps.

In one dimension, the problem gets substantially easier because the data can be sorted. A clustering algorithm will unfortunately not exploit this, so use a one-dimensional method instead!

Consider finding the largest gap in one-dimensional data. It's trivial: sort (O(n log n), but in practice about as fast as it gets), then look at each pair of adjacent values for the largest difference.

Now try defining "largest gap" in 2 dimensions, and an efficient algorithm to locate it...

Has QUIT--Anony-Mousse