
Problem

Suppose I have a large array of bytes (think up to 4GB) containing some data. These bytes correspond to distinct objects in such a way that every s bytes (think s up to 32) will constitute a single object. One important fact is that this size s is the same for all objects, not stored within the objects themselves, and not known at compile time.

At the moment, these objects are logical entities only, not objects in the programming language. I have a comparison on these objects which consists of a lexicographical comparison of most of the object data, with a bit of different functionality to break ties using the remaining data. Now I want to sort these objects efficiently (this is really going to be a bottleneck of the application).

Ideas so far

I've thought of several possible ways to achieve this, but each of them appears to have some rather unfortunate consequences. You don't necessarily have to read all of these. I tried to print the central question of each approach in bold. If you are going to suggest one of these approaches, then your answer should respond to the related questions as well.

1. C quicksort

Of course the C quicksort algorithm is available in C++ applications as well. Its signature matches my requirements almost perfectly. But using that function prohibits inlining of the comparison function, which means that every comparison carries a function invocation overhead. I had hoped for a way to avoid that. Any experience of how C qsort_r compares to the STL in terms of performance would be very welcome.

2. Indirection using Objects pointing at data

It would be easy to write a bunch of objects holding pointers to their respective data. Then one could sort those. There are two aspects to consider here. On the one hand, just moving around pointers instead of all the data would mean less memory operations. On the other hand, not moving the objects would probably break memory locality and thus cache performance. Chances that the deeper levels of quicksort recursion could actually access all their data from a few cache pages would vanish almost completely. Instead, each cached memory page would yield only very few usable data items before being replaced. If anyone could provide some experience about the tradeoff between copying and memory locality I'd be very glad.

3. Custom iterator, reference and value objects

I wrote a class which serves as an iterator over the memory range. Dereferencing this iterator yields not a reference but a newly constructed object to hold the pointer to the data and the size s which is given at construction of the iterator. So these objects can be compared, and I even have an implementation of std::swap for these. Unfortunately, it appears that std::swap isn't enough for std::sort. In some parts of the process, my gcc implementation uses insertion sort (as implemented in __insertion_sort in file stl_algo.h) which moves a value out of the sequence, moves a number of items by one step, and then moves the first value back into the sequence at the appropriate position:

          typename iterator_traits<_RandomAccessIterator>::value_type
            __val = _GLIBCXX_MOVE(*__i);
          _GLIBCXX_MOVE_BACKWARD3(__first, __i, __i + 1);
          *__first = _GLIBCXX_MOVE(__val);

Do you know of a standard sorting implementation which doesn't require a value type but can operate with swaps alone?

So I'd not only need my class which serves as a reference, but I would also need a class to hold a temporary value. And as the size of my objects is dynamic, I'd have to allocate that on the heap, which means memory allocations at the very leaves of the recursion tree. Perhaps one alternative would be a value type with a static size that should be large enough to hold objects of the sizes I currently intend to support. But that would mean even more hackery in the relation between the reference_type and the value_type of the iterator class. And it would mean I'd have to update that size for my application to one day support larger objects. Ugly.

If you can think of a clean way to get the above code to manipulate my data without having to allocate memory dynamically, that would be a great solution. I'm using C++11 features already, so using move semantics or similar won't be a problem.

4. Custom sorting

I even considered reimplementing all of quicksort. Perhaps I could make use of the fact that my comparison is mostly a lexicographical compare, i.e. I could sort sequences by first byte and only switch to the next byte when the first byte is the same for all elements. I haven't worked out the details on this yet, but if anyone can suggest a reference, an implementation or even a canonical name to be used as a keyword for such a byte-wise lexicographical sorting, I'd be very happy. I'm still not convinced that with reasonable effort on my part I could beat the performance of the STL template implementation.

5. Completely different algorithm

I know there are many, many kinds of sorting algorithms out there. Some of them might be better suited to my problem. Radix sort comes to mind first, but I haven't really thought it through yet. If you can suggest a sorting algorithm more suited to my problem, please do so. Preferably with an implementation, but even without one.

Question

So basically my question is this:
“How would you efficiently sort objects of dynamic size in heap memory?”

Any answer to this question which is applicable to my situation is good, no matter whether it is related to my own ideas or not. Answers to the individual questions marked in bold, or any other insight which might help me decide between my alternatives, would be useful as well, particularly if no definite answer to a single approach turns up.

MvG

6 Answers


The most practical solution is to use the C style qsort that you mentioned.

template <unsigned S>
struct my_obj {
    enum { SIZE = S };
    const void *p_;
    my_obj (const void *p) : p_(p) {}
    //...accessors to get data from pointer
    bool operator< (const my_obj &) const; // lexicographic compare + tie-break
    static int c_style_compare (const void *a, const void *b) {
        my_obj aa(a);
        my_obj bb(b);
        return (aa < bb) ? -1 : (bb < aa);
    }
};

template <unsigned N, typename OBJ>
void my_sort (char (&large_array)[N], const OBJ &) {
    qsort(large_array, N/OBJ::SIZE, OBJ::SIZE, OBJ::c_style_compare);
}

(Or, you can call qsort_r if you prefer.) Since STL sort inlines the comparison calls, qsort may not give you the fastest possible sorting. If all your system does is sorting, it may be worth adding the code to get custom iterators to work. But if most of the time your system is doing something other than sorting, the extra gain you get may just be noise in your overall system.

jxh
  • Using a compile-time parameter `S` means I'd have to instantiate things for every possible size *s*. While that is possible for the problems I'll be able to tackle in the foreseeable future, I guess that for this case I'd rather use `qsort_r` so that I can pass the size as a runtime parameter. And yes, this application will read input, sort it, do some duplicate handling and write output, so sorting will constitute the bulk of the operation. – MvG Jul 19 '12 at 18:25
  • @MvG: Good point. I edited my answer so that the size is a trait of the object rather than the call to `my_sort`. This allows you to implement the comparison once in the template, although multiple instances of the function will get created. – jxh Jul 19 '12 at 19:01

If you can overlay an object onto your buffer, then you can use std::sort, as long as your overlay type is copyable (in this example, 4 64-bit integers). With 4GB of data, you're going to need a lot of memory, though.

As discussed in the comments, you can have a selection of possible sizes based on some number of fixed-size templates. You would have to pick from these types at runtime (using a switch statement, for example). Here's an example of the template type with various sizes and an example of sorting the 64-bit size.

Here's a simple example:

#include <vector>
#include <algorithm>
#include <iostream>
#include <cstdint>
#include <cstdlib>
#include <ctime>

template <int WIDTH>
struct variable_width
{
   unsigned char w_[WIDTH];
};

typedef variable_width<8> vw8;
typedef variable_width<16> vw16;
typedef variable_width<32> vw32;
typedef variable_width<64> vw64;
typedef variable_width<128> vw128;
typedef variable_width<256> vw256;
typedef variable_width<512> vw512;
typedef variable_width<1024> vw1024;

bool operator<(const vw64& l, const vw64& r)
{
   const std::int64_t* l64 = reinterpret_cast<const std::int64_t*>(l.w_);
   const std::int64_t* r64 = reinterpret_cast<const std::int64_t*>(r.w_);

   return *l64 < *r64;
}

std::ostream& operator<<(std::ostream& out, const vw64& w)
{
   const std::int64_t* w64 = reinterpret_cast<const std::int64_t*>(w.w_);
   out << *w64;
   return out;
}

int main()
{
   srand(time(NULL));
   std::vector<unsigned char> buffer(10 * sizeof(vw64));
   vw64* w64_arr = reinterpret_cast<vw64*>(&buffer[0]);

   for(int x = 0; x < 10; ++x)
   {
      *reinterpret_cast<std::int64_t*>(w64_arr[x].w_) = rand();
   }

   std::sort(
      w64_arr,
      w64_arr + 10);

   for(int x = 0; x < 10; ++x)
   {
      std::cout << w64_arr[x] << '\n';
   }

   std::cout << std::endl;

   return 0;
}
Chad
  • In your approach, the object size is fixed at compile time. Large enough to hold the amount of data as I specified, but when used with smaller objects, there could be quite a lot of memory waste, as only a fraction of each allocated object would actually be used. I'd have to recompile the application and change the size of the object class if I wanted to support larger data items in the future. On the whole: a possible but very static approach. – MvG Jul 19 '12 at 14:20
  • Ahh, I didn't realize that the item sizes could be different at runtime. Do you have a practical set of sizes that can be used? You could do much the same thing with a set of defined `template` classes that would handle most cases... – Chad Jul 19 '12 at 14:25
  • I have an infinite sequence of possible sizes, each corresponding to a larger class of problems than the one before. Ideally I'd like to solve any of them, but practically I'll be restricted to the first few on today's hardware. But what is infeasible today may well become practical tomorrow, and I'd hate having to edit my code to accommodate that fact, even if I certainly could. – MvG Jul 19 '12 at 14:28
  • If you edit your answer to that template style, then I'll give you an upvote: using 64bit integers to transfer data seems a good idea in any case, and with those I could cover a large range of possible sizes with a few template instantiations. I guess that could cater for the next few years at least. – MvG Jul 19 '12 at 14:45
  • I've updated, hope it helps at least a bit. The template version requires a lot more casting, so it's not as pretty to look at, but it still "works". – Chad Jul 19 '12 at 14:55
  • There are still a number of problems with your code. The comparator doesn't lex-compare anymore, so it can only serve as an example. The text still talks about a fixed 4-element object, which matches `main` but not the first half of the code. – MvG Jul 19 '12 at 15:21

I'd agree with std::sort using a custom iterator, reference and value type; it's best to use the standard machinery where possible.

You worry about memory allocations, but modern memory allocators are very efficient at handing out small chunks of memory, particularly when being repeatedly reused. You could also consider using your own (stateful) allocator, handing out length s chunks from a small pool.

ecatmur
  • Are memory allocations inlined? If not, I still worry about even the function call overhead. The custom allocation mechanism seems like a good idea; I could even put the state into the iterator and the machinery into the reference to value cast operator, as I'd prefer to avoid static variables and the sorting algorithm doesn't take an allocator object. – MvG Jul 19 '12 at 14:25
  • @MvG I don't believe memory allocation is inlined typically, but the processor will be able to apply indirect branch prediction which should lessen the overhead. – ecatmur Jul 19 '12 at 14:31

Given the enormous size (4GB), I would seriously consider dynamic code generation. Compile a custom sort into a shared library, and dynamically load it. The only non-inlined call should be the call into the library.

With precompiled headers, the compilation times may actually be not that bad. The whole <algorithm> header doesn't change, nor does your wrapper logic. You just need to recompile a single predicate each time. And since it's a single function you get, linking is trivial.

MSalters
  • Well, pretty much all sorting algorithms I know of use recursion, and most libraries implement that using function recursion. Since you cannot inline recursively called functions, there is a limit there. Unless the compiler is really clever or you manage a call stack yourself. By *dynamic code generation* you mean generate the code for the size I'll need? I'm a bit worried about requiring a full compiler at runtime. – MvG Jul 19 '12 at 18:31
  • To clarify, I mean inlining the predicate into the sort function itself. The recursive call itself is unlikely to be inlined too deep. But yes, the [g++] tag was the reason to suggest this - you are allowed to distribute GCC. – MSalters Jul 19 '12 at 18:38

Since there are only 32 different object variations (1 to 32 bytes), you could easily create an object type for each and select a call to std::sort based on a switch statement. Each call will get inlined and highly optimized.

Some object sizes might require a custom iterator, as the compiler will insist on padding native objects to align to address boundaries. Pointers can be used as iterators in the other cases since a pointer has all the properties of an iterator.

Mark Ransom
#include <vector>
#include <algorithm>
#include <cstdlib>

#define OBJECT_SIZE 32
struct structObject
{
    unsigned char* pObject;
    bool operator < (const structObject &n) const
    {
        for(int i=0; i<OBJECT_SIZE; i++)
        {
            if(*(pObject + i) != *(n.pObject + i))
                return (*(pObject + i) < *(n.pObject + i));
        }

        return false;       
    }
};

int main()
{       
    std::vector<structObject> vObjects;
    unsigned char* pObjects = (unsigned char*)malloc(10 * OBJECT_SIZE); // 10 Objects


    for(int i=0; i<10; i++)
    {
        structObject stObject;
        stObject.pObject = pObjects + (i*OBJECT_SIZE);      
        *stObject.pObject = 'A' + 9 - i; // Add a value to the start to check the sort
        vObjects.push_back(stObject);
    }

    std::sort(vObjects.begin(), vObjects.end());


    free(pObjects);

    return 0;
}

To skip the #define

struct structObject
{
    unsigned char* pObject; 
};

struct structObjectComparerAscending 
{
    int iSize;

    structObjectComparerAscending(int _iSize)
    {
        iSize = _iSize;
    }

    bool operator ()(const structObject &stLeft, const structObject &stRight) const
    { 
        for(int i=0; i<iSize; i++)
        {
            if(*(stLeft.pObject + i) != *(stRight.pObject + i))
                return (*(stLeft.pObject + i) < *(stRight.pObject + i));
        }

        return false;       
    }
};

int main()
{   
    int iObjectSize = 32; // Read it from somewhere

    std::vector<structObject> vObjects;
    unsigned char* pObjects = (unsigned char*)malloc(10 * iObjectSize);

    for(int i=0; i<10; i++)
    {
        structObject stObject;
        stObject.pObject = pObjects + (i*iObjectSize);      
        *stObject.pObject = 'A' + 9 - i; // Add a value to the start to work with something...  
        vObjects.push_back(stObject);
    }

    std::sort(vObjects.begin(), vObjects.end(), structObjectComparerAscending(iObjectSize));


    free(pObjects);

    return 0;
}
João Augusto
  • So you'd go for my approach #2, sorting pointers instead of the actual object data blocks. Definitely the easiest solution, so I'll use that for prototyping while waiting for more answers, but my concerns about memory locality remain, and you haven't addressed these. Also note that the constant nature of your `OBJECT_SIZE` macro doesn't really reflect the dynamic aspect of my problem. – MvG Jul 19 '12 at 15:18
  • @MvG if the size of an object's data is small, then you are probably better off not working with pointers and doing copies instead, but your 's' is dynamic. Maybe you should think of using MapReduce to resolve your problem. – João Augusto Jul 19 '12 at 16:46
  • I hadn't known about map-reduce before, but now that I look at it, it seems that I already have that kind of setup in mind, and would use this sorting here as one step in the process. – MvG Jul 19 '12 at 18:36