The reason it fails is that the original algorithm you posted relies on bitwise arithmetic: it treats each bit of an int as a separate item. Depending on how your compiler defines int, that caps the maximum n at 32 or 64. You could change every int declaration to int64_t to force 64-bit storage, but that would still cap your maximum n at 64. (In fact, the cap is the bit size minus 2, i.e. 62 or 30, so I'm not entirely sure why you got stuck around 23.) The real solution is to change every int to std::bitset, which can store a maximum n approaching SIZE_MAX.
#include <iostream>
#include <vector>
#include <bitset>

template<size_t bitSize>
void pretty_print(const std::vector<std::bitset<bitSize>>& c, std::bitset<bitSize> combo)
{
    int n = c.size();
    for (int i = 0; i < n; ++i)
    {
        if (combo[i]) // i.e. ((combo >> i) & 1) != 0
            std::cout << c[i].to_ullong() << ' ';
    }
    std::cout << std::endl;
}
template<size_t bitSize>
bool smallerThan(std::bitset<bitSize> bits1, std::bitset<bitSize> bits2)
{
    // Compare from the most significant bit down; the first bit that
    // differs decides the comparison (bit 0 included).
    for (size_t i = bitSize; i-- > 0; )
    {
        if (bits1[i] != bits2[i])
            return bits2[i];
    }
    return false;
}
template<size_t bitSize>
std::bitset<bitSize> bitAddition(std::bitset<bitSize> bits1, std::bitset<bitSize> bits2)
{
    // Carry-free sum (XOR) plus carry bits (AND, shifted up),
    // repeated until no carries remain.
    std::bitset<bitSize> carry;
    while (bits2 != std::bitset<bitSize>(0))
    {
        carry = bits1 & bits2;
        bits1 = bits1 ^ bits2;
        bits2 = carry << 1;
    }
    return bits1;
}
template<size_t bitSize>
std::bitset<bitSize> bitSubtraction(std::bitset<bitSize> bits1, std::bitset<bitSize> bits2)
{
    // Borrow-based subtraction: (~bits1 & bits2) marks the positions
    // that must borrow from the next higher bit.
    while (bits2 != std::bitset<bitSize>(0))
    {
        std::bitset<bitSize> borrow = (~bits1) & bits2;
        bits1 = bits1 ^ bits2;
        bits2 = borrow << 1;
    }
    return bits1;
}
template<size_t bitSize>
std::bitset<bitSize> bitSubtractionStopAt0(std::bitset<bitSize> bits1, std::bitset<bitSize> bits2)
{
    // Same as bitSubtraction, but returns as soon as the running
    // value reaches zero; bitDivision below relies on this.
    while (bits2 != std::bitset<bitSize>(0))
    {
        std::bitset<bitSize> borrow = (~bits1) & bits2;
        bits1 = bits1 ^ bits2;
        bits2 = borrow << 1;
        if (bits1 == std::bitset<bitSize>(0)) return bits1;
    }
    return bits1;
}
template<size_t bitSize>
std::bitset<bitSize> bitDivision(std::bitset<bitSize> dividend, std::bitset<bitSize> divisor)
{
    // Division by repeated subtraction.  This is O(quotient), but
    // combo() below only ever divides exactly by a power of two, and
    // the quotient stays small (at most 2^k - 1).
    std::bitset<bitSize> quotient(0);
    while (smallerThan(std::bitset<bitSize>(0), dividend))
    {
        dividend = bitSubtractionStopAt0(dividend, divisor);
        quotient = bitAddition(quotient, std::bitset<bitSize>(1));
    }
    return quotient;
}
template<size_t bitSize>
void combo(const std::vector<std::bitset<bitSize>>& c, int k)
{
    auto n = c.size();
    std::bitset<bitSize> one(1);
    // Start with the k lowest bits set: (1 << k) - 1.
    std::bitset<bitSize> combo(bitSubtraction((one << k), std::bitset<bitSize>(1)));
    while (smallerThan(combo, (one << n)))
    {
        pretty_print(c, combo);
        // negCombo = -combo (two's complement): flip every bit, then
        // add one by flipping from the bottom until a bit becomes 1.
        auto negCombo = combo;
        negCombo.flip();
        for (size_t i = 0; i < bitSize; i++)
        {
            negCombo.flip(i);
            if (negCombo[i])
            {
                break;
            }
        }
        // x = combo & -combo = lowest set bit of combo.
        std::bitset<bitSize> x = combo & negCombo;
        // y = combo + x, via a bitwise ripple-carry adder.
        std::bitset<bitSize> y;
        bool tempBit = 0;
        for (size_t i = 0; i < bitSize; i++)
        {
            y[i] = combo[i] ^ x[i];
            if (tempBit)
            {
                if (!y[i])
                {
                    tempBit = 0;
                }
                y[i] = y[i] ^ 1;
            }
            if (combo[i] & x[i])
            {
                tempBit = 1;
            }
        }
        // Gosper's hack: move the lowest block of ones up by one
        // position and pack the leftover ones back at the bottom.
        std::bitset<bitSize> z = (combo & ~y);
        combo = bitDivision(z, x);
        combo >>= 1;
        combo |= y;
    }
}
int main()
{
    const int n = 500;
    int k = 2;
    // n + 2 bits: headroom above bit n - 1 for the (1 << n) sentinel
    // and the carry produced in the next-combination step.
    std::bitset<(n + 2)> n_bits(n);
    std::vector<std::bitset<n + 2>> people;
    for (unsigned long i = 1; i < n_bits.to_ullong() + 1; ++i)
    {
        people.push_back(i);
    }
    combo(people, k);
    return 0;
}
I've only tested 90C4 and 500C2, but this should work for any n smaller than SIZE_MAX. I also believe the bitwise routines I've used could be optimized further; I'm not an expert on that.
Another approach is to use a larger number type, such as int128_t or int1024_t. However, you may also need to overload some of the bitwise operations, depending on the type.
Also, you mentioned that:
For example there is a vector of {30,30,30,30,30,30,30,30,60,60,60,60,60,60,60,60,90,90,90,90,90,90,90,90}, when you try to find combination of this vector by 8, it does not do it correctly.
The way this algorithm works, it does not check the value of each member. Instead, it assumes all members are unique, just as a real nCr function would.
Some of the other methods mentioned in Creating all possible k combinations of n items in C++ are quite fast as well, especially since I haven't yet found a good way of implementing the bitwise calculations. I guess you could use some data structure that splits the number into several bitset<63> objects, so that each chunk could be cast to unsigned long long to use its native bitwise operators; I'm not sure whether that would actually run faster, though.
Either way, if you are only computing on the order of nCk ≈ thousands of combinations, the difference between all of these methods should be negligible.