Are you trying to save space or time? You could always allocate a new "packed" list, then copy and sort the fragmented list into that new "packed" list.
I'm wondering about the results in that prior answer, suspecting that the merge sort on the packed list was operating on a list already sorted by the prior tests. In my testing, I started off with a large (bigger than cache) packed linked list where the values were pseudo-random integers; sorting it resulted in a sorted packed list where the nodes were pseudo-randomly ordered in memory, what I call a "scattered" (non-cache-friendly) linked list. I then refilled that list with yet another set of pseudo-random numbers to test merge sort on that "scattered" linked list.
The fastest linked list sort (assuming near random data) is a bottom-up merge sort using a small (25 to 32 entry) array of pointers (or references) to lists. Wiki article for the basic algorithm:
https://en.wikipedia.org/wiki/Merge_sort#Bottom-up_implementation_using_lists
The issue here is that the merge sort is working with a "scattered" linked list, which is not cache friendly.
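For reference, a minimal C sketch of that bottom-up algorithm; the node type, the merge() helper, and the 32-entry array size are my assumptions, not something from the prior answer:

    #include <stddef.h>

    struct node {
        int data;
        struct node *next;
    };

    /* merge two already sorted lists (helper assumed by the algorithm) */
    static struct node *merge(struct node *a, struct node *b)
    {
        struct node head, *tail = &head;
        while (a != NULL && b != NULL) {
            if (b->data < a->data) { tail->next = b; b = b->next; }
            else                   { tail->next = a; a = a->next; }
            tail = tail->next;
        }
        tail->next = (a != NULL) ? a : b;
        return head.next;
    }

    #define NUMLISTS 32     /* array[i] holds a sorted list of 2^i nodes */

    struct node *sort_list(struct node *head)
    {
        struct node *array[NUMLISTS] = {NULL};
        struct node *node, *next;
        int i;

        for (node = head; node != NULL; node = next) {
            next = node->next;
            node->next = NULL;          /* treat node as a 1-element list */
            /* merge upward until an empty slot is found */
            for (i = 0; i < NUMLISTS && array[i] != NULL; i++) {
                node = merge(array[i], node);
                array[i] = NULL;
            }
            if (i == NUMLISTS)          /* array full; use the last slot */
                i = NUMLISTS - 1;
            array[i] = node;
        }
        /* merge all remaining lists into one */
        node = NULL;
        for (i = 0; i < NUMLISTS; i++)
            node = merge(array[i], node);
        return node;
    }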
As mentioned in the prior answer (although this conflicts with its reported results), it should be faster to copy the linked list to an array, sort the array with either quicksort or merge sort (either of these array sorts is cache friendly), then create a new sorted packed list from the now sorted array. This assumes that space isn't an issue for allocating the array or the packed list.
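If space allows, that array round trip could look something like this sketch, reusing the node type from above and using the C library's qsort() for the (cache friendly) array step; the names and allocation strategy are illustrative:

    #include <stdlib.h>

    struct node { int data; struct node *next; };   /* same node type as above */

    static int cmp_int(const void *pa, const void *pb)
    {
        int a = *(const int *)pa, b = *(const int *)pb;
        return (a > b) - (a < b);
    }

    /* copy list values to an array, sort the array, then build a new
       sorted "packed" list (all nodes contiguous in one allocation) */
    struct node *sort_via_array(struct node *head, size_t count)
    {
        if (count == 0)
            return NULL;
        int *values = malloc(count * sizeof *values);
        struct node *packed = malloc(count * sizeof *packed);
        if (values == NULL || packed == NULL) {
            free(values);
            free(packed);
            return NULL;
        }
        size_t i = 0;
        for (struct node *p = head; p != NULL; p = p->next)
            values[i++] = p->data;
        qsort(values, count, sizeof *values, cmp_int);
        for (i = 0; i < count; i++) {
            packed[i].data = values[i];
            packed[i].next = (i + 1 < count) ? &packed[i + 1] : NULL;
        }
        free(values);
        return packed;      /* caller frees the whole list with one free() */
    }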
Other alternatives would be to create an array of pointers to nodes and sort the list's data in place (without relinking the nodes), or to create an array of pointers to nodes, sort the array of pointers, then relink the list nodes according to the sorted pointers. Again, the issue is that these methods still traverse a "scattered" linked list, which is not cache friendly.
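A sketch of the second of those alternatives (sort an array of node pointers, then relink), again with illustrative names. Note that the comparator still dereferences the scattered nodes, which is why this approach stays cache unfriendly:

    #include <stdlib.h>

    struct node { int data; struct node *next; };   /* same node type as above */

    static int cmp_node_ptr(const void *pa, const void *pb)
    {
        const struct node *a = *(const struct node * const *)pa;
        const struct node *b = *(const struct node * const *)pb;
        return (a->data > b->data) - (a->data < b->data);
    }

    /* sort by relinking: build an array of pointers to the existing nodes,
       sort the pointers, then rewire the next fields in sorted order */
    struct node *sort_by_relinking(struct node *head, size_t count)
    {
        if (count == 0)
            return head;
        struct node **ptrs = malloc(count * sizeof *ptrs);
        if (ptrs == NULL)
            return head;                /* out of memory; list unchanged */
        size_t i = 0;
        for (struct node *p = head; p != NULL; p = p->next)
            ptrs[i++] = p;
        qsort(ptrs, count, sizeof *ptrs, cmp_node_ptr);
        for (i = 0; i + 1 < count; i++)
            ptrs[i]->next = ptrs[i + 1];
        ptrs[count - 1]->next = NULL;
        head = ptrs[0];
        free(ptrs);
        return head;
    }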