I suspect I am using unordered_map incorrectly: instead of the expected average O(1) lookup complexity, I seem to be getting O(n).
Right now I define the map as:
unordered_map<my_class, double> objects_to_values;
The key question is how I define my_class. At the moment a my_class object is identified by three integers, call them a, b, and c.
A skeleton looks like:
class my_class {
private:
    int _a, _b, _c;

public:
    // Initialize the three identifying integers via the member initializer list
    my_class(int a, int b, int c) : _a(a), _b(b), _c(c) {}

    int a() const { return _a; }
    int b() const { return _b; }
    int c() const { return _c; }

    // Two keys compare equal iff all three components match
    bool operator==(const my_class& obj) const {
        return a() == obj.a() && b() == obj.b() && c() == obj.c();
    }
};
In addition, I specialize std::hash for my_class:
#include <functional>

namespace std {
template <>
struct hash<my_class> {
    size_t operator()(const my_class& k) const {
        // XOR/shift mix of the three component hashes
        const size_t ha = hash<int>()(k.a());
        const size_t hb = hash<int>()(k.b());
        const size_t hc = hash<int>()(k.c());
        return hc ^ (((ha ^ ((hb << 1) >> 1)) << 1) >> 1);
    }
};
}
What I observe is that my code becomes much slower as the unordered_map grows, probably because of my calls to find, which I would normally expect to be O(1) on average.
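To check whether collisions are really the cause, I understand the standard bucket interface can show whether entries are piling into a few buckets; something along these lines (a diagnostic sketch, print_bucket_stats is just a made-up helper name):

#include <algorithm>
#include <cstddef>
#include <iostream>
#include <unordered_map>

// Report how entries are spread over buckets; one huge bucket would
// point at the hash function rather than at unordered_map itself.
void print_bucket_stats(const std::unordered_map<my_class, double>& m) {
    std::size_t largest = 0;
    for (std::size_t i = 0; i < m.bucket_count(); ++i)
        largest = std::max(largest, m.bucket_size(i));
    std::cout << "size: " << m.size()
              << ", buckets: " << m.bucket_count()
              << ", load factor: " << m.load_factor()
              << ", largest bucket: " << largest << '\n';
}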
Am I doing this right, or should I improve the hash function (which, admittedly, is not great)? Is there a better way to hash together three values, each in the range 0-100000 or so?
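For instance, one alternative I have been looking at is a hash_combine-style mix (the pattern boost::hash_combine uses, with its 0x9e3779b9 constant) in place of my specialization above; would something like this be better?

#include <cstddef>
#include <functional>

// Fold one value into a running seed (the mixing step from boost::hash_combine)
inline void hash_combine(std::size_t& seed, int v) {
    seed ^= std::hash<int>()(v) + 0x9e3779b9 + (seed << 6) + (seed >> 2);
}

namespace std {
template <>
struct hash<my_class> {
    size_t operator()(const my_class& k) const {
        size_t seed = 0;
        hash_combine(seed, k.a());
        hash_combine(seed, k.b());
        hash_combine(seed, k.c());
        return seed;
    }
};
}

Or, since each value fits in 17 bits, I could presumably pack all three losslessly into one size_t (assuming 64-bit size_t), e.g. (size_t(k.a()) << 34) | (size_t(k.b()) << 17) | size_t(k.c()), and hash that instead.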