Semi-supervised means that you'd optimize (!) the clustering to produce the "optimum" results on the data where you have labels, and expect it to then also cluster the unlabeled data well. This can be hard to get working, depending on your data. For example, with k-means you would likely optimize k to match the number of known clusters, but what about the clusters that are not yet known?
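A minimal sketch of this idea, assuming scikit-learn and a toy data set (the blob locations, labeled subset size, and candidate range of k are all made up for illustration): cluster everything, but score each k only against the points whose labels you know.

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.metrics import adjusted_rand_score

rng = np.random.default_rng(0)
# toy data: three well-separated Gaussian blobs, 50 points each
X = np.vstack([rng.normal(loc, 0.3, size=(50, 2))
               for loc in ([0, 0], [3, 0], [0, 3])])
y = np.repeat([0, 1, 2], 50)
# pretend only 30 randomly chosen points carry a known label
labeled = rng.choice(len(X), size=30, replace=False)

best_k, best_score = None, -1.0
for k in range(2, 7):
    pred = KMeans(n_clusters=k, n_init=10, random_state=0).fit_predict(X)
    # measure agreement with the known labels on the labeled subset only
    score = adjusted_rand_score(y[labeled], pred[labeled])
    if score > best_score:
        best_k, best_score = k, score
print(best_k, round(best_score, 2))
```

On data this clean the chosen k matches the number of known classes, which is exactly the caveat above: the procedure can only recover structure that the labeled subset already hints at.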
If you just want to see how well your clustering method works, you do not need a train-test split. That serves the purpose of avoiding overfitting when optimizing parameters (and, to that extent, of avoiding an overly optimistic estimate of your real performance). When not using the labels in the method (as in clustering), and also not using them for parameterization, you can simply perform what is called "external evaluation": you re-add the labels to your data set and evaluate how well the clustering agrees with them.
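For example, external evaluation with scikit-learn might look like this (Iris and k-means are just placeholders for your data and method; the labels y are never seen by the clustering itself):

```python
from sklearn.datasets import load_iris
from sklearn.cluster import KMeans
from sklearn.metrics import adjusted_rand_score, normalized_mutual_info_score

X, y = load_iris(return_X_y=True)

# cluster without ever looking at y ...
pred = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(X)

# ... then "re-add" the labels and measure agreement
ari = adjusted_rand_score(y, pred)            # chance-corrected, 1.0 = perfect
nmi = normalized_mutual_info_score(y, pred)   # information-theoretic agreement
print(round(ari, 2), round(nmi, 2))
```

Both measures are invariant to how the clusters are numbered, so you do not need to match cluster IDs to class IDs by hand.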
But beware: clusters can be good even if they do not agree with your labels. For example, your label might be "olympics", but the clustering produces a cluster for "swimming". It's a good cluster, even if it splits up your provided label (one may even argue that it is good *because* it does so — it refines your label!).
If all your data is labeled, always prefer classification! Don't attempt to optimize clustering to simulate classification.