4.37
(a) Worst-case (n – 1) comparisons
1. Scan the list from left to right, comparing every item with the first one.
for i = 2 … n: compare A[1] and A[i]
(exactly n – 1 comparisons).
2. The first time a comparison returns "≠", the ternary outcome tells which of the
two items is the 0; this fixes the value of A[1] and of every item compared so far,
so the number of 0’s can be counted while finishing the scan.
3. Overwrite the array with that many 0’s followed by the remaining 1’s
(no extra comparisons). If every comparison returned "=", the array is constant
and hence already sorted, so nothing needs to be written. A code sketch of the
whole procedure follows.
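A minimal sketch of the scan in code, assuming the items may only be inspected
through a three-way comparator that is counted explicitly (the function and
helper names are illustrative, not part of the exercise):

    def sort_zero_one_worst_case(a):
        """Sort a 0/1 list in place with exactly len(a) - 1 three-way comparisons.
        Returns the number of comparisons used."""
        n = len(a)
        comparisons = 0

        def cmp(i, j):                        # the only access to the values
            nonlocal comparisons
            comparisons += 1
            return (a[i] > a[j]) - (a[i] < a[j])

        first_value = None                    # value of a[0], unknown until a "≠" is seen
        same_as_first = 1                     # a[0] plus the items found equal to it
        zeros_among_rest = 0                  # 0's among the items that differ from a[0]
        for i in range(1, n):
            c = cmp(0, i)
            if c == 0:
                same_as_first += 1
            else:
                first_value = 0 if c < 0 else 1   # the smaller side of a "≠" is the 0
                if c > 0:                         # a[i] < a[0], so a[i] is a 0
                    zeros_among_rest += 1
        if first_value is None:               # every answer was "=": already sorted
            return comparisons
        zeros = zeros_among_rest + (same_as_first if first_value == 0 else 0)
        a[:] = [0] * zeros + [1] * (n - zeros)
        return comparisons

For example, sort_zero_one_worst_case([1, 0, 1, 1, 0]) rewrites the list as
[0, 0, 1, 1, 1] after exactly 4 comparisons.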
Optimality: let the adversary answer "=" to every comparison. If the algorithm
stops after at most n – 2 comparisons, the graph whose edges are the compared
pairs is disconnected, so some set C of items is never compared against the rest.
Both "every item is 0" and "the items of C are 1, all others are 0" are consistent
with every answer given, yet they contain different numbers of 0’s and therefore
have different sorted outputs, so the algorithm errs on one of them.
Thus at least n – 1 comparisons are necessary and the algorithm is optimal.
(b) Average-case (2⁄3) n + O(1) comparisons
Scan the list from left to right, maintaining a run of items already known to be
mutually equal (initially just A[1]); the run’s common value may still be unknown.
    compare the next item with the last item of the run
    if "=" : the item joins the run and the scan goes on
    if "≠" : the ternary outcome says which side is the 0, so every item of the run
             and the new item are classified; the item after that starts a new run
             by itself (no comparison needed)
• The scan therefore falls into blocks, each closed by the first "≠" it meets;
a block that uses j comparisons contains j + 1 items, all classified when the
block closes.
• On a uniformly random 0/1 input every comparison returns "=" with probability
exactly 1/2, independently of the past, so j is geometric with mean 2: the scan
spends on average 2 comparisons per 3 items, i.e. (2⁄3) n + O(1) comparisons in total.
• If the scan ends inside a run, one extra comparison against any already classified
item settles its value; if no item was ever classified, the input is constant and
already sorted. (The worst case remains n – 1 comparisons, so part (a) is not
contradicted.)
• The value of every item, and hence the total number k of 0’s in the whole input,
is now known; writing k zeros followed by n – k ones finishes the sort without
further comparisons. A code sketch follows.
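A sketch of the run-scanning pass in code, under the same comparison-counting
convention as before (helper names are illustrative):

    def sort_zero_one_avg(a):
        """Sort a 0/1 list in place with the run-scanning method of part (b).
        Returns the number of three-way comparisons used."""
        n = len(a)
        comparisons = 0

        def cmp(i, j):                        # the only access to the values
            nonlocal comparisons
            comparisons += 1
            return (a[i] > a[j]) - (a[i] < a[j])

        value = [None] * n                    # classified values; None = still unknown
        run = [0]                             # indices of the current all-equal run
        for i in range(1, n):
            if not run:                       # item right after a "≠": new run, no comparison
                run = [i]
                continue
            c = cmp(run[-1], i)
            if c == 0:                        # "=": item i joins the run
                run.append(i)
            else:                             # "≠": the smaller side is the 0
                run_value = 0 if c < 0 else 1
                for j in run:
                    value[j] = run_value
                value[i] = 1 - run_value
                run = []
        if run:                               # trailing run still unclassified
            known = next((j for j in range(n) if value[j] is not None), None)
            if known is None:
                return comparisons            # whole input is constant: already sorted
            c = cmp(known, run[0])            # one extra comparison settles the run
            run_value = value[known] if c == 0 else 1 - value[known]
            for j in run:
                value[j] = run_value
        zeros = value.count(0)
        a[:] = [0] * zeros + [1] * (n - zeros)
        return comparisons

For example, sort_zero_one_avg([1, 1, 0, 0, 0, 1]) rewrites the list as
[0, 0, 0, 1, 1, 1] and reports 4 comparisons.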
Optimality: at any moment the algorithm’s knowledge consists of some items whose
values are known and a partition of the remaining items into groups known to be
mutually equal, the groups’ common values being independent fair bits. One checks
that a comparison whose outcome is not already forced returns "=" with probability
at least 1/2, so a single ternary comparison yields at most
(1/2)·1 + (1/4)·2 + (1/4)·2 = 3/2 bits of information. To sort correctly, the
algorithm must finish with every item classified except possibly one all-equal
group, i.e. it must extract at least n – 1 of the n input bits; hence every
algorithm needs at least (n – 1)/(3/2) = (2⁄3)(n – 1) expected comparisons, so the
procedure above is optimal up to an additive constant.
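As an illustrative check (not part of the proof), a short simulation of the scan
on uniform random inputs shows the empirical average settling at about two thirds
of n; the script and its parameters are ad hoc:

    import random

    def scan_comparisons(bits):
        """Comparisons used by the part (b) scan, including the final clean-up."""
        n, comparisons = len(bits), 0
        run = [0]                      # indices of the current all-equal run
        classified_any = False
        for i in range(1, n):
            if not run:                # item right after a "≠" starts a run for free
                run = [i]
                continue
            comparisons += 1
            if bits[i] == bits[run[-1]]:
                run.append(i)          # "="
            else:
                run = []               # "≠": the run and item i are classified
                classified_any = True
        if run and classified_any:
            comparisons += 1           # settle the trailing run against a known item
        return comparisons

    random.seed(0)
    n, trials = 3000, 200
    avg = sum(scan_comparisons([random.randint(0, 1) for _ in range(n)])
              for _ in range(trials)) / trials
    print(avg / n)                     # prints ≈ 0.667, i.e. about 2/3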