I'm working on a project for which I need a very fast algorithm for checking whether a supplied number is pandigital. Though the logic seems sound, I'm not particularly happy with the performance of the methods described below.
I can check one million 9-digit numbers in about 520ms, 600ms and 1600ms respectively with the three methods below. I'm working on a low-latency application, and in production I'll have a dataset of about 9 to 9.5 billion 7- to 9-digit numbers that I'll need to check.
I have three candidates right now (well, really two) that use the following logic:
Method 1: I take an input N, split it into a byte array of its constituent digits, sort the array with Array.Sort, and iterate over it with a for loop, checking each element against the loop counter:
byte[] Digits = SplitDigits(N);
int len = NumberLength(N);
Array.Sort(Digits);
for (int i = 0; i <= len - 1; i++)
{
if (i + 1 != Digits[i])
return false;
}
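For reference, here is a self-contained sketch of what the whole of Method 1 looks like; the SplitDigits and NumberLength helpers here are simplified stand-ins for my actual implementations:

using System;

// Simplified stand-ins for the helpers the snippet relies on.
static int NumberLength(int n)
{
    int len = 0;
    while (n > 0) { len++; n /= 10; }   // count decimal digits
    return len;
}

static byte[] SplitDigits(int n)
{
    byte[] digits = new byte[NumberLength(n)];
    for (int i = 0; n > 0; i++)
    {
        digits[i] = (byte)(n % 10);     // peel off the lowest digit
        n /= 10;
    }
    return digits;                      // order doesn't matter, the caller sorts
}

static bool IsPandigitalSorted(int n)
{
    byte[] digits = SplitDigits(n);
    int len = digits.Length;
    Array.Sort(digits);
    // A 1-to-len pandigital number must sort to exactly 1, 2, ..., len.
    for (int i = 0; i < len; i++)
    {
        if (digits[i] != i + 1)
            return false;
    }
    return true;
}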
Method 2: This method rests on somewhat dubious logic: I split the input N into a byte array of its constituent digits and then run the following test:
if (N * (N + 1) * 0.5 == DigitSum(N) && Factorial(len) == DigitProduct(N))
return true;
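For completeness, the DigitSum, DigitProduct and Factorial helpers it relies on look roughly like this (simplified stand-ins, not my exact code):

// Simplified stand-ins for the helpers referenced in Method 2.
static int DigitSum(int n)
{
    int sum = 0;
    while (n > 0) { sum += n % 10; n /= 10; }   // add up the decimal digits
    return sum;
}

static int DigitProduct(int n)
{
    int product = 1;
    while (n > 0) { product *= n % 10; n /= 10; }   // multiply the decimal digits
    return product;
}

static long Factorial(int k)
{
    long result = 1;
    for (int i = 2; i <= k; i++)
        result *= i;
    return result;
}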
Method 3: I dislike this method, so it isn't a real candidate, but: I convert the int to a string and then use String.Contains to check that every required digit is present.
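Roughly, it amounts to this (a sketch, not my exact code):

static bool IsPandigitalString(int n)
{
    string s = n.ToString();
    // For an n-digit number to be 1-to-n pandigital, each digit 1..n must appear.
    for (int d = 1; d <= s.Length; d++)
    {
        if (!s.Contains(d.ToString()))
            return false;
    }
    return true;
}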
The second and third methods have fairly stable runtimes, though the first method's runtime bounces around a lot - it can go as high as 620ms at times.
Ideally I'd like to get the runtime for the one-million 9-digit benchmark under 10ms. Any thoughts?
I'm running this on a Pentium 6100 laptop at 2GHz.
PS - is the mathematical logic of the second method sound?