I am doing clustering using mcl. I am trying to "optimize" the clustering with respect to a quality score by tuning the inflation parameter I and a couple of other parameters I introduced.
I have two questions about this optimization:
1) Correct me if I am wrong: cross-validation is used when we try to predict classes for new input. Therefore, the concept does not make sense in the context of clustering, where all the inputs are already known and we are just trying to group them.
2) I am planning to run experiments with different sets of my parameters and then select the ones that give the best results (a rough sketch of what I have in mind is below). However, I read about clm close
and the possibility of using hierarchical clustering and walking through the tree to find the best parameters. I am not familiar with hierarchical clustering, so how would that method outperform simply testing different parameter sets?
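For reference, here is a minimal sketch of the kind of sweep I have in mind, assuming a label-format (--abc) input graph and the standard mcl flags (-I for inflation, -o for the output clustering); the quality() function and the file names are hypothetical stand-ins for the score and data I am already using:

```python
import subprocess

# Hypothetical file name -- replace with your own input graph.
GRAPH = "graph.abc"                      # label-format (--abc) input for mcl
INFLATIONS = [1.4, 2.0, 2.5, 3.0, 4.0, 6.0]

def quality(cluster_file: str) -> float:
    """Stub: return your clustering quality score for one mcl output file."""
    raise NotImplementedError("plug in your own quality measure here")

best = None
for I in INFLATIONS:
    out = f"out.I{I}.clusters"
    # -I sets the inflation parameter; -o names the output clustering file
    subprocess.run(["mcl", GRAPH, "--abc", "-I", str(I), "-o", out], check=True)
    score = quality(out)
    if best is None or score > best[0]:
        best = (score, I, out)

print(f"best inflation: {best[1]} (score {best[0]}, clusters in {best[2]})")
```

The same loop would just gain extra nesting for the additional parameters I introduced, so my question is really whether the tree-based approach buys anything over this kind of grid search.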