I have a randomized recursive backtracking algorithm to generate Sudoku puzzles (see here). It is fast enough on average, but the worst-case runtime is unacceptably long. Here is a histogram of the runtime in milliseconds for 100 trials (the "More" bucket is about 200,000 ms!):
I would like to improve the algorithm by simply timing it out after t ms and restarting with a new random seed. To keep this from repeating indefinitely, I would either give up after n tries or increase t after each failed try. If t is well above the median runtime, there's a good chance a subsequent try will finish much faster.
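To make the idea concrete, here is a minimal sketch of that restart loop in Java. The `Attempt` interface, the cooperative deadline check, and the doubling schedule are my own assumptions, not your actual generator: the solver is expected to compare `System.nanoTime()` against the deadline periodically during recursion and throw when it is exceeded, which avoids the thread-interruption machinery that a hard timeout would need on Android.

```java
import java.util.Random;

public class RestartDemo {
    /** Thrown by an attempt once it notices it has passed its deadline. */
    static class AttemptTimeoutException extends RuntimeException {}

    /** One randomized attempt; hypothetical stand-in for the puzzle generator. */
    interface Attempt<T> {
        // Implementations should periodically check System.nanoTime() against
        // deadlineNanos during recursion and throw AttemptTimeoutException.
        T run(Random rng, long deadlineNanos);
    }

    /**
     * Restart wrapper: fresh seed on every try, timeout doubled after each
     * failed try so the total work stays within about 2x the final timeout.
     * Returns null if all maxTries attempts time out.
     */
    static <T> T runWithRestarts(Attempt<T> attempt, long initialTimeoutMs, int maxTries) {
        long timeoutMs = initialTimeoutMs;
        Random seeder = new Random();
        for (int i = 0; i < maxTries; i++) {
            long deadline = System.nanoTime() + timeoutMs * 1_000_000L;
            try {
                return attempt.run(new Random(seeder.nextLong()), deadline);
            } catch (AttemptTimeoutException e) {
                timeoutMs *= 2;  // geometric backoff before the next try
            }
        }
        return null;
    }
}
```

One design note: doubling t means that even if the first timeout is set too low for the current device, the schedule self-corrects within a few tries, which partially sidesteps the per-processor calibration problem below.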
Questions:
- How can I adjust the timeout period t for different processors? Is there a fast, reliable way to benchmark processor performance prior to each run? Alternatively, should I adapt to the processor over several runs, for instance using the mean runtime of all previous runs? I am running this on Android, if that's relevant.
- Is there an entirely different strategy that would avoid the long tail in the runtime distribution?