You can check this empirically with a simple piece of code:
import java.util.Arrays;

int[] mVals = {11, 7, 9, 12};
for (int m : mVals) {
    int[] cells = new int[m];
    // count how many of the squares 1^2..100^2 land in each cell of a table of size m
    for (int i = 1; i <= 100; i++) {
        int x = i * i % m;
        cells[x]++;
    }
    System.out.println("m=" + m + " cells=" + Arrays.toString(cells));
}
This will yield:
m=11 cells=[9, 19, 0, 18, 18, 18, 0, 0, 0, 18, 0]
m=7 cells=[14, 29, 28, 0, 29, 0, 0]
m=9 cells=[33, 23, 0, 0, 22, 0, 0, 22, 0]
m=12 cells=[16, 33, 0, 0, 34, 0, 0, 0, 0, 17, 0, 0]
Since your values are in the specified range, you can see that the "worst" cell in the m=11 table receives an element with probability 19/100, while for every other value of m the most crowded cell has a higher probability.
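To make that comparison explicit, here is a minimal follow-up sketch (assuming the same 100 squared keys as above) that prints only the most loaded cell for each m:

import java.util.Arrays;

int[] mVals = {11, 7, 9, 12};
for (int m : mVals) {
    int[] cells = new int[m];
    for (int i = 1; i <= 100; i++) {
        cells[i * i % m]++;
    }
    // the fullest cell bounds the longest chain a lookup may have to scan
    int worst = Arrays.stream(cells).max().getAsInt();
    System.out.println("m=" + m + " -> worst cell holds " + worst + " of 100 keys");
}

For m=11 the printed maximum is 19; for 7, 9 and 12 it is 29, 33 and 34 respectively, matching the tables above.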
As for why, there are several factors at play:
- A larger value of m is usually preferred. To see why, consider the extreme cases m=1 (all elements end up in a single list) and m=2 (half of the elements in each of the two lists).
- Prime numbers are preferred and "behave nicer" for hash functions. This topic is discussed thoroughly in the thread Why should hash functions use a prime number modulus?. The idea is that prime numbers are less vulnerable to bias from a specific domain of elements, and your set of squared numbers is one example of such a domain (see the sketch below).
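To illustrate the second point, here is a small sketch (not part of the original experiment; the choice of keys is mine, picked purely for illustration). It hashes 100 keys that are all multiples of 4 into a table of size 12, which shares a factor with the keys, and into a table of size 11, which is prime:

import java.util.Arrays;

// keys that are all multiples of 4 collide heavily when m shares a
// factor with them (m = 12), but spread out when m is prime (m = 11)
int[] mVals = {12, 11};
for (int m : mVals) {
    int[] cells = new int[m];
    for (int k = 4; k <= 400; k += 4) {   // 100 keys: 4, 8, ..., 400
        cells[k % m]++;
    }
    System.out.println("m=" + m + " cells=" + Arrays.toString(cells));
}

With m=12 only the cells 0, 4 and 8 are ever used, while with m=11 the keys are spread over all eleven cells, which is exactly the kind of bias a prime modulus protects against.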