Has anyone implemented Laszlo Babai's quasi-polynomial graph isomorphism?
http://people.cs.uchicago.edu/~laci/
I am not an expert in this area. I am wondering why not just implement his algorithm and run it to verify its time complexity?
Let's imagine that you did implement the algorithm. Could you use it to verify the algorithm's time complexity empirically?
Unfortunately, the answer is no. If we say that the time complexity of an algorithm is O(f(n)), we're saying that its worst-case runtime is eventually bounded above by some constant multiple of f(n).
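Formally, that claim unpacks to the standard definition:

$$T(n) = O(f(n)) \iff \exists\, c > 0\ \exists\, n_0\ \forall n \ge n_0:\ T(n) \le c \cdot f(n)$$

Both the constant $c$ and the threshold $n_0$ are unspecified, which is exactly what makes the claim invisible to any finite set of measurements.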
So imagine that we did code it up. To check whether the claimed bound actually holds, we could run the implementation on a few inputs and plot the runtimes, but that wouldn't tell us anything conclusive. Our inputs might simply not be "large enough" for the asymptotic upper bound to kick in. And even at large sizes, measuring the runtime on a number of different inputs wouldn't tell us whether we'd hit the worst case unless we tried all possible inputs, and the number of possible inputs to a graph isomorphism algorithm grows exponentially with the input size (there are $2^{n(n-1)/2}$ labelled graphs on $n$ vertices). That means we couldn't brute-force all possible inputs at large sizes, so we'd never be sure whether we'd found the actual worst-case input. Some algorithms, like the simplex algorithm for linear programming, are known to be very fast except on rare pathological inputs, so we'd always run the risk that the worst case simply never showed up in our experiments.
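To make that concrete, here's a minimal sketch of the kind of experiment you might run, with Python's built-in `sorted` as a hypothetical stand-in for the isomorphism tester (nothing here is from Babai's paper):

```python
import random
import time

def measure(algorithm, make_input, sizes):
    """Time `algorithm` on one random input per size and print the results.

    Purely illustrative: a handful of (size, time) pairs like these are
    consistent with infinitely many different asymptotic bounds.
    """
    for n in sizes:
        x = make_input(n)
        t0 = time.perf_counter()
        algorithm(x)
        elapsed = time.perf_counter() - t0
        print(f"n = {n:>9,}   time = {elapsed:8.4f} s")

# `sorted` stands in for the algorithm under test; over any finite range
# of n, its measured times look just as compatible with O(n log n) as
# with, say, O(n^1.1).
measure(sorted,
        lambda n: [random.random() for _ in range(n)],
        [10_000, 100_000, 1_000_000])
```

Whatever curve these points trace, infinitely many asymptotic bounds fit them equally well, and random inputs say nothing about the worst case.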
There are also a lot of practical concerns that would arise. The theoretical analysis of an algorithm usually counts primitive operations and ignores things like cache behavior, paging, and thrashing, so certain large inputs might take far longer than expected simply because they don't play well with the cache, even though the analysis of the number of primitive operations is correct.
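As a rough illustration of how memory effects alone can distort timings, here's a sketch (assuming NumPy is installed; the array size is arbitrary):

```python
import time
import numpy as np

a = np.random.rand(5000, 5000)   # stored in row-major (C) order

t0 = time.perf_counter()
total_rows = sum(a[i, :].sum() for i in range(a.shape[0]))   # contiguous reads
t_rows = time.perf_counter() - t0

t0 = time.perf_counter()
total_cols = sum(a[:, j].sum() for j in range(a.shape[1]))   # strided reads
t_cols = time.perf_counter() - t0

# Both loops perform the same 25 million additions, but the column-wise
# traversal jumps 5000 * 8 bytes between consecutive elements, so it is
# typically noticeably slower on real hardware.
print(f"row-wise: {t_rows:.3f} s   column-wise: {t_cols:.3f} s")
```

Two runs doing identical amounts of "theoretical" work can differ substantially in wall-clock time, so raw timings are a noisy proxy for operation counts.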
In short, no amount of running the algorithm on practical inputs will ever let you confirm or refute the runtime analysis. If the measurements seem to match the predicted trend, that's great... but what about the infinitely many inputs you didn't try? If they don't seem to match, how do you know you just haven't tried sufficiently large inputs, or that other factors aren't adding noise to your measurements?
This is why it's tough to analyze algorithms like these. For all we know, we already have a polynomial time algorithm for graph isomorphism, but no one has been able to prove that it has the right runtime. No amount of empirical data will work as a proof, though it might motivate people to try to prove that a particular approach runs quickly as a way of theoretically justifying the observed runtime.
In practice, graph isomorphism is fast on essentially all inputs that actually arise, even though its worst-case complexity remains open. See Brendan McKay's C implementation of nauty.
> nauty and Traces are programs for computing automorphism groups of graphs and digraphs. They can also produce a canonical labelling. nauty and Traces are written in a portable subset of C, and run on a considerable number of different systems.
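For example, assuming the third-party pynauty bindings to nauty are installed (the function and parameter names below are pynauty's, not part of McKay's C API), checking isomorphism via canonical labelling looks like this:

```python
# Requires the third-party pynauty bindings: pip install pynauty
from pynauty import Graph, certificate, isomorphic

# Two different labellings of the 4-cycle; adjacency_dict maps each
# vertex to a list of neighbours (each undirected edge listed once).
g1 = Graph(4, adjacency_dict={0: [1, 3], 1: [2], 2: [3]})
g2 = Graph(4, adjacency_dict={0: [2, 3], 1: [2, 3]})

print(isomorphic(g1, g2))                   # True
print(certificate(g1) == certificate(g2))   # canonical forms agree: True
```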
Babai's result is only of theoretical interest.