Most sorting algorithms rely on the correct implementation of a comparison function that returns the ordering between two elements – that is, a function that takes two elements and returns whether the first is less than, equal to, or greater than the second:
// Sentinel values follow the convention used by JavaScript's
// Array.prototype.sort: negative for "less than", zero for "equal",
// positive for "greater than".
const LESS_THAN = -1;
const EQUAL = 0;
const GREATER_THAN = 1;

function compare(x, y) {
    if (x < y) {
        return LESS_THAN;
    } else if (x > y) {
        return GREATER_THAN;
    } else {
        return EQUAL;
    }
}
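With the numeric constants above, compare plugs directly into JavaScript’s built-in sort. For example:

console.log([3, 1, 2].sort(compare)); // prints [ 1, 2, 3 ]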
The comparison function must define a valid total ordering on the elements of the sequence for the sorting algorithm to make any sense.
It is tempting to ask whether using a random comparison function would offer a clever way to shuffle a sequence. The short answer is no, generally speaking: it’s not a good idea even if it somehow appears to work. The correct solution is the Fisher-Yates shuffling algorithm, sketched below.
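For concreteness, here is a minimal sketch of Fisher-Yates in the same JavaScript style as the example above, assuming Math.random is an acceptable randomness source for the use case:

// Shuffle `array` in place using the Fisher-Yates algorithm.
// Each of the N! permutations is equally likely, assuming
// Math.random() yields uniform values in [0, 1).
function fisherYatesShuffle(array) {
    for (let i = array.length - 1; i > 0; i--) {
        // Pick a uniformly random index j in [0, i].
        const j = Math.floor(Math.random() * (i + 1));
        [array[i], array[j]] = [array[j], array[i]];
    }
    return array;
}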
From an engineering perspective, sorting functions are designed to sort things. Abusing one to shuffle an array is just asking for trouble, because you never know whether a change in the sorting algorithm down the road might break the “shuffling” algorithm. Worse, the breakage could be subtle, creating biases in the output that are hard to detect unless the tests rigorously verify the statistics. Moreover, some sorting algorithms can run unusually slowly with a random comparator, or take a very long time if the randomness is not in their favor.
From a mathematical perspective, bounded-time deterministic comparison sorts just can’t give you a high-quality distribution of random permutations! In fact, if you use a random comparator that returns LESS_THAN or GREATER_THAN with equal probability (a “uniformly random comparator”), it’s provably impossible for such an algorithm to shuffle correctly.
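In code, the comparator being ruled out might look like this sketch (the exact spelling varies, but the essence is a comparator that ignores its arguments):

// A “uniformly random comparator”: it ignores x and y entirely
// and flips a fair coin instead. Do not use this to shuffle!
function uniformlyRandomComparator(x, y) {
    return Math.random() < 0.5 ? LESS_THAN : GREATER_THAN;
}

// Tempting, but provably biased (and engine-dependent):
// myArray.sort(uniformlyRandomComparator);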
Suppose you have an array of size N. In a perfect shuffle, the probability of any element (say, the first element) ending up at any given position (say, the last position) is always 1/N.
At every step, a deterministic comparison sort must make a binary decision based on the ordering of two selected elements, which gives rise to a binary decision tree. The outcomes of these decisions determine the final sorted output.
Now if you feed it a uniformly random comparator, every decision it makes is random: each has a 1/2 probability of going one way or the other. Each run therefore traces a path p down the decision tree that terminates after some number of rounds n_p (this is where the bounded-time assumption matters). Certain paths in the decision tree lead to the scenario where the first element ends up at the last position, so the overall probability of that scenario is the sum of the probabilities of every path p that leads to it:
$$\sum_{p \,\in\, \text{all desired paths}} \left(\frac{1}{2}\right)^{n_p}$$
Adding up powers of 1/2 will always yield a fraction whose denominator is a power of 2 (a dyadic rational). Since the correct probability is 1/N, unless N is itself a power of two, there is simply no way for the result to be correct! For example, with N = 3, no finite sum of powers of 1/2 can equal exactly 1/3. Hence, there is no way to get the exact probability 1/N in general.
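You can observe the bias empirically with a quick Monte Carlo sketch. The exact distribution depends on which sorting algorithm the engine uses, so the numbers will vary across engines, but they cannot be exactly 1/3 each:

// Tally which position the value 0 from [0, 1, 2] lands in after
// “shuffling” with the uniformly random comparator from earlier.
// A perfect shuffle would give each position probability 1/3, but
// sums of powers of 1/2 can never equal exactly 1/3.
const TRIALS = 1000000;
const counts = [0, 0, 0];
for (let t = 0; t < TRIALS; t++) {
    const a = [0, 1, 2].sort(uniformlyRandomComparator);
    counts[a.indexOf(0)]++;
}
console.log(counts.map(c => c / TRIALS)); // approximates dyadic fractions, not 1/3 each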
Edit: Chris P. pointed out that the algorithm must have bounded time for this proof to work. Thanks!