\chapter{Sorting}
\index{sorting}

\key{Sorting}
is a fundamental algorithm design problem.
Many efficient algorithms
use sorting as a subroutine,
because it is often easier to process
data if the elements are in a sorted order.

For example, the problem ''does the array contain
two equal elements?'' is easy to solve using sorting.
If the array contains two equal elements,
they will be next to each other after sorting,
so it is easy to find them.
Also the problem ''what is the most frequent element
in the array?'' can be solved similarly.
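For example, the ''two equal elements'' problem can be solved
with a short function; the following is a minimal sketch that
sorts a copy of the input and compares adjacent elements:
\begin{lstlisting}
bool hasDuplicate(vector<int> v) {
    sort(v.begin(), v.end());
    for (int i = 1; i < (int)v.size(); i++) {
        // equal elements are adjacent after sorting
        if (v[i] == v[i-1]) return true;
    }
    return false;
}
\end{lstlisting}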
There are many algorithms for sorting, and they are
also good examples of how to apply
different algorithm design techniques.
The efficient general sorting algorithms
work in $O(n \log n)$ time,
and many algorithms that use sorting
as a subroutine also
have this time complexity.
\section{Sorting theory}
The basic problem in sorting is as follows:
\begin{framed}
\noindent
Given an array that contains $n$ elements,
your task is to sort the elements
in increasing order.
\end{framed}
\noindent
For example, the array
\begin{center}
\begin{tikzpicture}[scale=0.7]
\draw (0,0) grid (8,1);
\node at (0.5,0.5) {$1$};
\node at (1.5,0.5) {$3$};
\node at (2.5,0.5) {$8$};
\node at (3.5,0.5) {$2$};
\node at (4.5,0.5) {$9$};
\node at (5.5,0.5) {$2$};
\node at (6.5,0.5) {$5$};
\node at (7.5,0.5) {$6$};
\footnotesize
\node at (0.5,1.4) {$1$};
\node at (1.5,1.4) {$2$};
\node at (2.5,1.4) {$3$};
\node at (3.5,1.4) {$4$};
\node at (4.5,1.4) {$5$};
\node at (5.5,1.4) {$6$};
\node at (6.5,1.4) {$7$};
\node at (7.5,1.4) {$8$};
\end{tikzpicture}
\end{center}
will be as follows after sorting:
\begin{center}
\begin{tikzpicture}[scale=0.7]
\draw (0,0) grid (8,1);
\node at (0.5,0.5) {$1$};
\node at (1.5,0.5) {$2$};
\node at (2.5,0.5) {$2$};
\node at (3.5,0.5) {$3$};
\node at (4.5,0.5) {$5$};
\node at (5.5,0.5) {$6$};
\node at (6.5,0.5) {$8$};
\node at (7.5,0.5) {$9$};
\footnotesize
\node at (0.5,1.4) {$1$};
\node at (1.5,1.4) {$2$};
\node at (2.5,1.4) {$3$};
\node at (3.5,1.4) {$4$};
\node at (4.5,1.4) {$5$};
\node at (5.5,1.4) {$6$};
\node at (6.5,1.4) {$7$};
\node at (7.5,1.4) {$8$};
\end{tikzpicture}
\end{center}
\subsubsection{$O(n^2)$ algorithms}
\index{bubble sort}
Simple algorithms for sorting an array
work in $O(n^2)$ time.
Such algorithms are short and usually
consist of two nested loops.
A famous $O(n^2)$ time sorting algorithm
is \key{bubble sort} where the elements
''bubble'' in the array according to their values.

Bubble sort consists of $n-1$ rounds.
On each round, the algorithm iterates through
the elements of the array.
Whenever two consecutive elements are found
that are not in the correct order,
the algorithm swaps them.
The algorithm can be implemented as follows
for an array
$\texttt{t}[1],\texttt{t}[2],\ldots,\texttt{t}[n]$:
\begin{lstlisting}
for (int i = 1; i <= n-1; i++) {
    for (int j = 1; j <= n-i; j++) {
        if (t[j] > t[j+1]) swap(t[j],t[j+1]);
    }
}
\end{lstlisting}
After the first round of the algorithm,
the largest element will be in the correct position,
and in general, after $k$ rounds, the $k$ largest
elements will be in the correct positions.
Thus, after $n-1$ rounds, the whole array
will be sorted.

For example, in the array
\begin{center}
\begin{tikzpicture}[scale=0.7]
\draw (0,0) grid (8,1);
\node at (0.5,0.5) {$1$};
\node at (1.5,0.5) {$3$};
\node at (2.5,0.5) {$8$};
\node at (3.5,0.5) {$2$};
\node at (4.5,0.5) {$9$};
\node at (5.5,0.5) {$2$};
\node at (6.5,0.5) {$5$};
\node at (7.5,0.5) {$6$};
\footnotesize
\node at (0.5,1.4) {$1$};
\node at (1.5,1.4) {$2$};
\node at (2.5,1.4) {$3$};
\node at (3.5,1.4) {$4$};
\node at (4.5,1.4) {$5$};
\node at (5.5,1.4) {$6$};
\node at (6.5,1.4) {$7$};
\node at (7.5,1.4) {$8$};
\end{tikzpicture}
\end{center}
\noindent
the first round of bubble sort swaps elements
as follows:
\begin{center}
\begin{tikzpicture}[scale=0.7]
\draw (0,0) grid (8,1);
\node at (0.5,0.5) {$1$};
\node at (1.5,0.5) {$3$};
\node at (2.5,0.5) {$2$};
\node at (3.5,0.5) {$8$};
\node at (4.5,0.5) {$9$};
\node at (5.5,0.5) {$2$};
\node at (6.5,0.5) {$5$};
\node at (7.5,0.5) {$6$};
\draw[thick,<->] (3.5,-0.25) .. controls (3.25,-1.00) and (2.75,-1.00) .. (2.5,-0.25);
\footnotesize
\node at (0.5,1.4) {$1$};
\node at (1.5,1.4) {$2$};
\node at (2.5,1.4) {$3$};
\node at (3.5,1.4) {$4$};
\node at (4.5,1.4) {$5$};
\node at (5.5,1.4) {$6$};
\node at (6.5,1.4) {$7$};
\node at (7.5,1.4) {$8$};
\end{tikzpicture}
\end{center}
\begin{center}
\begin{tikzpicture}[scale=0.7]
\draw (0,0) grid (8,1);
\node at (0.5,0.5) {$1$};
\node at (1.5,0.5) {$3$};
\node at (2.5,0.5) {$2$};
\node at (3.5,0.5) {$8$};
\node at (4.5,0.5) {$2$};
\node at (5.5,0.5) {$9$};
\node at (6.5,0.5) {$5$};
\node at (7.5,0.5) {$6$};
\draw[thick,<->] (5.5,-0.25) .. controls (5.25,-1.00) and (4.75,-1.00) .. (4.5,-0.25);
\footnotesize
\node at (0.5,1.4) {$1$};
\node at (1.5,1.4) {$2$};
\node at (2.5,1.4) {$3$};
\node at (3.5,1.4) {$4$};
\node at (4.5,1.4) {$5$};
\node at (5.5,1.4) {$6$};
\node at (6.5,1.4) {$7$};
\node at (7.5,1.4) {$8$};
\end{tikzpicture}
\end{center}
\begin{center}
\begin{tikzpicture}[scale=0.7]
\draw (0,0) grid (8,1);
\node at (0.5,0.5) {$1$};
\node at (1.5,0.5) {$3$};
\node at (2.5,0.5) {$2$};
\node at (3.5,0.5) {$8$};
\node at (4.5,0.5) {$2$};
\node at (5.5,0.5) {$5$};
\node at (6.5,0.5) {$9$};
\node at (7.5,0.5) {$6$};
\draw[thick,<->] (6.5,-0.25) .. controls (6.25,-1.00) and (5.75,-1.00) .. (5.5,-0.25);
\footnotesize
\node at (0.5,1.4) {$1$};
\node at (1.5,1.4) {$2$};
\node at (2.5,1.4) {$3$};
\node at (3.5,1.4) {$4$};
\node at (4.5,1.4) {$5$};
\node at (5.5,1.4) {$6$};
\node at (6.5,1.4) {$7$};
\node at (7.5,1.4) {$8$};
\end{tikzpicture}
\end{center}
\begin{center}
\begin{tikzpicture}[scale=0.7]
\draw (0,0) grid (8,1);
\node at (0.5,0.5) {$1$};
\node at (1.5,0.5) {$3$};
\node at (2.5,0.5) {$2$};
\node at (3.5,0.5) {$8$};
\node at (4.5,0.5) {$2$};
\node at (5.5,0.5) {$5$};
\node at (6.5,0.5) {$6$};
\node at (7.5,0.5) {$9$};
\draw[thick,<->] (7.5,-0.25) .. controls (7.25,-1.00) and (6.75,-1.00) .. (6.5,-0.25);
\footnotesize
\node at (0.5,1.4) {$1$};
\node at (1.5,1.4) {$2$};
\node at (2.5,1.4) {$3$};
\node at (3.5,1.4) {$4$};
\node at (4.5,1.4) {$5$};
\node at (5.5,1.4) {$6$};
\node at (6.5,1.4) {$7$};
\node at (7.5,1.4) {$8$};
\end{tikzpicture}
\end{center}
\subsubsection{Inversions}

\index{inversion}

Bubble sort is an example of a sorting
algorithm that always swaps consecutive
elements in the array.
It turns out that the time complexity
of such an algorithm is \emph{always}
at least $O(n^2)$, because in the worst case,
$O(n^2)$ swaps are required for sorting the array.

A useful concept when analyzing sorting
algorithms is an \key{inversion}:
a pair of elements
$(\texttt{t}[a],\texttt{t}[b])$
in the array such that
$a<b$ and $\texttt{t}[a]>\texttt{t}[b]$,
i.e., the elements are in the wrong order.
For example, in the array
\begin{center}
\begin{tikzpicture}[scale=0.7]
\draw (0,0) grid (8,1);
\node at (0.5,0.5) {$1$};
\node at (1.5,0.5) {$2$};
\node at (2.5,0.5) {$2$};
\node at (3.5,0.5) {$6$};
\node at (4.5,0.5) {$3$};
\node at (5.5,0.5) {$5$};
\node at (6.5,0.5) {$9$};
\node at (7.5,0.5) {$8$};
\footnotesize
\node at (0.5,1.4) {$1$};
\node at (1.5,1.4) {$2$};
\node at (2.5,1.4) {$3$};
\node at (3.5,1.4) {$4$};
\node at (4.5,1.4) {$5$};
\node at (5.5,1.4) {$6$};
\node at (6.5,1.4) {$7$};
\node at (7.5,1.4) {$8$};
\end{tikzpicture}
\end{center}
the inversions are $(6,3)$, $(6,5)$ and $(9,8)$.
The number of inversions tells us
how much work is needed to sort the array.
An array is completely sorted when
there are no inversions.
On the other hand, if the array elements
are in the reverse order,
the number of inversions is the largest possible:
\[1+2+\cdots+(n-1)=\frac{n(n-1)}{2} = O(n^2)\]

Swapping a pair of consecutive elements that are
in the wrong order removes exactly one inversion
from the array.
Hence, if a sorting algorithm can only
swap consecutive elements, each swap removes
at most one inversion, and the time complexity
of the algorithm is at least $O(n^2)$.
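For example, the number of inversions can be counted directly
with two nested loops (a simple $O(n^2)$ sketch that assumes
an array $\texttt{t}[1],\texttt{t}[2],\ldots,\texttt{t}[n]$):
\begin{lstlisting}
long long inversions = 0;
for (int a = 1; a <= n; a++) {
    for (int b = a+1; b <= n; b++) {
        // the pair (t[a],t[b]) is in the wrong order
        if (t[a] > t[b]) inversions++;
    }
}
\end{lstlisting}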
\subsubsection{$O(n \log n)$ algorithms}

\index{merge sort}

It is possible to sort an array efficiently
in $O(n \log n)$ time using algorithms
that are not limited to swapping consecutive elements.
One such algorithm is \key{mergesort}\footnote{According to \cite{knu983},
mergesort was invented by J. von Neumann in 1945.},
which is based on recursion.

Mergesort sorts a subarray \texttt{t}$[a,b]$ as follows:
\begin{enumerate}
\item If $a=b$, do not do anything, because the subarray is already sorted.
\item Calculate the position of the middle element: $k=\lfloor (a+b)/2 \rfloor$.
\item Recursively sort the subarray \texttt{t}$[a,k]$.
\item Recursively sort the subarray \texttt{t}$[k+1,b]$.
\item \emph{Merge} the sorted subarrays \texttt{t}$[a,k]$ and \texttt{t}$[k+1,b]$
into a sorted subarray \texttt{t}$[a,b]$.
\end{enumerate}
Mergesort is an efficient algorithm, because it
halves the size of the subarray at each step.
The recursion consists of $O(\log n)$ levels,
and processing each level takes $O(n)$ time.
Merging the subarrays \texttt{t}$[a,k]$ and \texttt{t}$[k+1,b]$
is possible in linear time, because they are already sorted.
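The above steps can be turned into code, for example, as follows
(a sketch that assumes a global, 1-indexed array \texttt{t};
the merge step collects the result into a temporary vector):
\begin{lstlisting}
void mergesort(int a, int b) {
    if (a == b) return; // one element: already sorted
    int k = (a+b)/2;
    mergesort(a, k);    // sort the left half
    mergesort(k+1, b);  // sort the right half
    // merge the two sorted halves
    vector<int> tmp;
    int x = a, y = k+1;
    while (x <= k || y <= b) {
        if (y > b || (x <= k && t[x] <= t[y])) tmp.push_back(t[x++]);
        else tmp.push_back(t[y++]);
    }
    for (int i = a; i <= b; i++) t[i] = tmp[i-a];
}
\end{lstlisting}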
For example, consider sorting the following array:
\begin{center}
\begin{tikzpicture}[scale=0.7]
\draw (0,0) grid (8,1);
\node at (0.5,0.5) {$1$};
\node at (1.5,0.5) {$3$};
\node at (2.5,0.5) {$6$};
\node at (3.5,0.5) {$2$};
\node at (4.5,0.5) {$8$};
\node at (5.5,0.5) {$2$};
\node at (6.5,0.5) {$5$};
\node at (7.5,0.5) {$9$};
\footnotesize
\node at (0.5,1.4) {$1$};
\node at (1.5,1.4) {$2$};
\node at (2.5,1.4) {$3$};
\node at (3.5,1.4) {$4$};
\node at (4.5,1.4) {$5$};
\node at (5.5,1.4) {$6$};
\node at (6.5,1.4) {$7$};
\node at (7.5,1.4) {$8$};
\end{tikzpicture}
\end{center}
The array will be divided into two subarrays
as follows:
\begin{center}
\begin{tikzpicture}[scale=0.7]
\draw (0,0) grid (4,1);
\draw (5,0) grid (9,1);
\node at (0.5,0.5) {$1$};
\node at (1.5,0.5) {$3$};
\node at (2.5,0.5) {$6$};
\node at (3.5,0.5) {$2$};
\node at (5.5,0.5) {$8$};
\node at (6.5,0.5) {$2$};
\node at (7.5,0.5) {$5$};
\node at (8.5,0.5) {$9$};
\footnotesize
\node at (0.5,1.4) {$1$};
\node at (1.5,1.4) {$2$};
\node at (2.5,1.4) {$3$};
\node at (3.5,1.4) {$4$};
\node at (5.5,1.4) {$5$};
\node at (6.5,1.4) {$6$};
\node at (7.5,1.4) {$7$};
\node at (8.5,1.4) {$8$};
\end{tikzpicture}
\end{center}
Then, the subarrays will be sorted recursively
as follows:
\begin{center}
\begin{tikzpicture}[scale=0.7]
\draw (0,0) grid (4,1);
\draw (5,0) grid (9,1);
\node at (0.5,0.5) {$1$};
\node at (1.5,0.5) {$2$};
\node at (2.5,0.5) {$3$};
\node at (3.5,0.5) {$6$};
\node at (5.5,0.5) {$2$};
\node at (6.5,0.5) {$5$};
\node at (7.5,0.5) {$8$};
\node at (8.5,0.5) {$9$};
\footnotesize
\node at (0.5,1.4) {$1$};
\node at (1.5,1.4) {$2$};
\node at (2.5,1.4) {$3$};
\node at (3.5,1.4) {$4$};
\node at (5.5,1.4) {$5$};
\node at (6.5,1.4) {$6$};
\node at (7.5,1.4) {$7$};
\node at (8.5,1.4) {$8$};
\end{tikzpicture}
\end{center}
Finally, the algorithm merges the sorted
subarrays and creates the final sorted array:
\begin{center}
\begin{tikzpicture}[scale=0.7]
\draw (0,0) grid (8,1);
\node at (0.5,0.5) {$1$};
\node at (1.5,0.5) {$2$};
\node at (2.5,0.5) {$2$};
\node at (3.5,0.5) {$3$};
\node at (4.5,0.5) {$5$};
\node at (5.5,0.5) {$6$};
\node at (6.5,0.5) {$8$};
\node at (7.5,0.5) {$9$};
\footnotesize
\node at (0.5,1.4) {$1$};
\node at (1.5,1.4) {$2$};
\node at (2.5,1.4) {$3$};
\node at (3.5,1.4) {$4$};
\node at (4.5,1.4) {$5$};
\node at (5.5,1.4) {$6$};
\node at (6.5,1.4) {$7$};
\node at (7.5,1.4) {$8$};
\end{tikzpicture}
\end{center}
\subsubsection{Sorting lower bound}

Is it possible to sort an array faster
than in $O(n \log n)$ time?
It turns out that this is \emph{not} possible
when we restrict ourselves to sorting algorithms
that are based on comparing array elements.

The lower bound for the time complexity
can be proved by considering sorting
as a process where each comparison of two elements
gives more information about the contents of the array.
The process creates the following tree:
\begin{center}
\begin{tikzpicture}[scale=0.7]
\draw (0,0) rectangle (3,1);
\node at (1.5,0.5) {$x < y?$};
\draw[thick,->] (1.5,0) -- (-2.5,-1.5);
\draw[thick,->] (1.5,0) -- (5.5,-1.5);
\draw (-4,-2.5) rectangle (-1,-1.5);
\draw (4,-2.5) rectangle (7,-1.5);
\node at (-2.5,-2) {$x < y?$};
\node at (5.5,-2) {$x < y?$};
\draw[thick,->] (-2.5,-2.5) -- (-4.5,-4);
\draw[thick,->] (-2.5,-2.5) -- (-0.5,-4);
\draw[thick,->] (5.5,-2.5) -- (3.5,-4);
\draw[thick,->] (5.5,-2.5) -- (7.5,-4);
\draw (-6,-5) rectangle (-3,-4);
\draw (-2,-5) rectangle (1,-4);
\draw (2,-5) rectangle (5,-4);
\draw (6,-5) rectangle (9,-4);
\node at (-4.5,-4.5) {$x < y?$};
\node at (-0.5,-4.5) {$x < y?$};
\node at (3.5,-4.5) {$x < y?$};
\node at (7.5,-4.5) {$x < y?$};
\draw[thick,->] (-4.5,-5) -- (-5.5,-6);
\draw[thick,->] (-4.5,-5) -- (-3.5,-6);
\draw[thick,->] (-0.5,-5) -- (0.5,-6);
\draw[thick,->] (-0.5,-5) -- (-1.5,-6);
\draw[thick,->] (3.5,-5) -- (2.5,-6);
\draw[thick,->] (3.5,-5) -- (4.5,-6);
\draw[thick,->] (7.5,-5) -- (6.5,-6);
\draw[thick,->] (7.5,-5) -- (8.5,-6);
\end{tikzpicture}
\end{center}
Here ''$x<y?$'' means that some elements
$x$ and $y$ are compared.
If $x<y$, the process continues to the left,
and otherwise to the right.
The results of the process are the possible
ways to sort the array, a total of $n!$ ways.
For this reason, the height of the tree
must be at least
\[ \log_2(n!) = \log_2(1)+\log_2(2)+\cdots+\log_2(n).\]
We get a lower bound for this sum
by choosing the last $n/2$ elements and
changing the value of each element to $\log_2(n/2)$.
This yields the estimate
\[ \log_2(n!) \ge (n/2) \cdot \log_2(n/2),\]
so the height of the tree and the minimum
possible number of steps in a sorting
algorithm in the worst case
is at least $n \log n$.
\subsubsection{Counting sort}

\index{counting sort}

The lower bound $n \log n$ does not apply to
algorithms that do not compare array elements
but use some other information.
An example of such an algorithm is
\key{counting sort} that sorts an array in
$O(n)$ time assuming that every element in the array
is an integer between $0 \ldots c$, where $c=O(n)$.

The algorithm creates a \emph{bookkeeping} array,
whose indices are elements of the original array.
The algorithm iterates through the original array
and calculates how many times each element
appears in the array.

For example, the array
\begin{center}
\begin{tikzpicture}[scale=0.7]
\draw (0,0) grid (8,1);
\node at (0.5,0.5) {$1$};
\node at (1.5,0.5) {$3$};
\node at (2.5,0.5) {$6$};
\node at (3.5,0.5) {$9$};
\node at (4.5,0.5) {$9$};
\node at (5.5,0.5) {$3$};
\node at (6.5,0.5) {$5$};
\node at (7.5,0.5) {$9$};
\footnotesize
\node at (0.5,1.4) {$1$};
\node at (1.5,1.4) {$2$};
\node at (2.5,1.4) {$3$};
\node at (3.5,1.4) {$4$};
\node at (4.5,1.4) {$5$};
\node at (5.5,1.4) {$6$};
\node at (6.5,1.4) {$7$};
\node at (7.5,1.4) {$8$};
\end{tikzpicture}
\end{center}
corresponds to the following bookkeeping array:
\begin{center}
\begin{tikzpicture}[scale=0.7]
\draw (0,0) grid (9,1);
\node at (0.5,0.5) {$1$};
\node at (1.5,0.5) {$0$};
\node at (2.5,0.5) {$2$};
\node at (3.5,0.5) {$0$};
\node at (4.5,0.5) {$1$};
\node at (5.5,0.5) {$1$};
\node at (6.5,0.5) {$0$};
\node at (7.5,0.5) {$0$};
\node at (8.5,0.5) {$3$};
\footnotesize
\node at (0.5,1.5) {$1$};
\node at (1.5,1.5) {$2$};
\node at (2.5,1.5) {$3$};
\node at (3.5,1.5) {$4$};
\node at (4.5,1.5) {$5$};
\node at (5.5,1.5) {$6$};
\node at (6.5,1.5) {$7$};
\node at (7.5,1.5) {$8$};
\node at (8.5,1.5) {$9$};
\end{tikzpicture}
\end{center}
For example, the value at position 3
in the bookkeeping array is 2,
because the element 3 appears 2 times
in the original array (positions 2 and 6).

The construction of the bookkeeping array
takes $O(n)$ time. After this, the sorted array
can be created in $O(n)$ time, because
the number of occurrences of each element can be retrieved
from the bookkeeping array.
Thus, the total time complexity of counting
sort is $O(n)$.

Counting sort is a very efficient algorithm,
but it can only be used when the constant $c$
is small enough that the array elements can
be used as indices in the bookkeeping array.
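The idea can be implemented, for example, as follows
(a sketch that assumes an array $\texttt{t}[1],\ldots,\texttt{t}[n]$
whose elements are integers between $0 \ldots c$):
\begin{lstlisting}
// bookkeeping array: cnt[v] = number of occurrences of the value v
vector<int> cnt(c+1, 0);
for (int i = 1; i <= n; i++) cnt[t[i]]++;
// write each value back as many times as it occurs, in increasing order
int j = 1;
for (int v = 0; v <= c; v++) {
    for (int k = 0; k < cnt[v]; k++) t[j++] = v;
}
\end{lstlisting}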
\section{Sorting in C++}

\index{sort@\texttt{sort}}

It is almost never a good idea to use
a self-made sorting algorithm
in a contest, because there are good
implementations available in programming languages.
For example, the C++ standard library contains
the function \texttt{sort} that can be easily used for
sorting arrays and other data structures.

There are many benefits in using a library function.
First, it saves time because there is no need to
implement the function.
In addition, the library implementation is
certainly correct and efficient: it is unlikely
that a self-made sorting function would be better.

In this section we will see how to use the
C++ \texttt{sort} function.
The following code sorts
a vector in increasing order:
\begin{lstlisting}
vector<int> v = {4,2,5,3,5,8,3};
sort(v.begin(),v.end());
\end{lstlisting}
After the sorting, the contents of the
vector will be
$[2,3,3,4,5,5,8]$.
The default sorting order is increasing,
but a reverse order is possible as follows:
\begin{lstlisting}
sort(v.rbegin(),v.rend());
\end{lstlisting}

An ordinary array can be sorted as follows:
\begin{lstlisting}
int n = 7; // array size
int t[] = {4,2,5,3,5,8,3};
sort(t,t+n);
\end{lstlisting}

The following code sorts the string \texttt{s}:
\begin{lstlisting}
string s = "monkey";
sort(s.begin(), s.end());
\end{lstlisting}

Sorting a string means that the characters
of the string are sorted.
For example, the string ''monkey'' becomes ''ekmnoy''.
\subsubsection{Comparison operators}

\index{comparison operator}

The function \texttt{sort} requires that
a \key{comparison operator} is defined for the data type
of the elements to be sorted.
During the sorting, this operator will be used
whenever it is necessary to find out the order of two elements.

Most C++ data types have a built-in comparison operator,
and elements of those types can be sorted automatically.
For example, numbers are sorted according to their values
and strings are sorted in alphabetical order.

\index{pair@\texttt{pair}}

Pairs (\texttt{pair}) are sorted primarily by their first
elements (\texttt{first}).
However, if the first elements of two pairs are equal,
they are sorted by their second elements (\texttt{second}):
\begin{lstlisting}
vector<pair<int,int>> v;
v.push_back({1,5});
v.push_back({2,3});
v.push_back({1,2});
sort(v.begin(), v.end());
\end{lstlisting}
After this, the order of the pairs is
$(1,2)$, $(1,5)$ and $(2,3)$.

\index{tuple@\texttt{tuple}}

In a similar way, tuples (\texttt{tuple})
are sorted primarily by the first element,
secondarily by the second element, etc.:
\begin{lstlisting}
vector<tuple<int,int,int>> v;
v.push_back(make_tuple(2,1,4));
v.push_back(make_tuple(1,5,3));
v.push_back(make_tuple(2,1,3));
sort(v.begin(), v.end());
\end{lstlisting}
After this, the order of the tuples is
$(1,5,3)$, $(2,1,3)$ and $(2,1,4)$.

\subsubsection{User-defined structs}

User-defined structs do not have a comparison
operator automatically.
The operator should be defined inside
the struct as a function
\texttt{operator<},
whose parameter is another element of the same type.
The operator should return \texttt{true}
if the element is smaller than the parameter,
and \texttt{false} otherwise.

For example, the following struct \texttt{P}
contains the x and y coordinates of a point.
The comparison operator is defined so that
the points are sorted primarily by the x coordinate
and secondarily by the y coordinate.
\begin{lstlisting}
struct P {
    int x, y;
    bool operator<(const P &p) {
        if (x != p.x) return x < p.x;
        else return y < p.y;
    }
};
\end{lstlisting}
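After this, a vector of points can be sorted directly,
because \texttt{sort} uses the operator defined above:
\begin{lstlisting}
vector<P> points = {{2,3},{1,5},{2,1}};
sort(points.begin(), points.end());
// order is now (1,5), (2,1), (2,3)
\end{lstlisting}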
\subsubsection{Comparison functions}

\index{comparison function}

It is also possible to give an external
\key{comparison function} to the \texttt{sort} function
as a callback function.
For example, the following comparison function \texttt{cmp}
sorts strings primarily by length and secondarily
by alphabetical order:
\begin{lstlisting}
bool cmp(string a, string b) {
    if (a.size() != b.size()) return a.size() < b.size();
    return a < b;
}
\end{lstlisting}
Now a vector of strings can be sorted as follows:
\begin{lstlisting}
sort(v.begin(), v.end(), cmp);
\end{lstlisting}
\section{Binary search}

\index{binary search}

A general method for searching for an element
in an array is to use a \texttt{for} loop
that iterates through the elements in the array.
For example, the following code searches for
an element $x$ in the array \texttt{t}:
\begin{lstlisting}
for (int i = 1; i <= n; i++) {
    if (t[i] == x) {} // x found at index i
}
\end{lstlisting}
The time complexity of this approach is $O(n)$,
because in the worst case, it is necessary to check
all elements in the array.
If the array may contain any elements,
this is also the best possible approach, because
there is no additional information available about where
in the array we should search for the element $x$.

However, if the array is \emph{sorted},
the situation is different.
In this case it is possible to perform the
search much faster, because the order of the
elements in the array guides the search.
The following \key{binary search} algorithm
efficiently searches for an element in a sorted array
in $O(\log n)$ time.

\subsubsection{Method 1}

The traditional way to implement binary search
resembles looking for a word in a dictionary.
At each step, the search halves the active region in the array,
until the target element is found, or it turns out
that there is no such element.

First, the search checks the middle element of the array.
If the middle element is the target element,
the search terminates.
Otherwise, the search recursively continues
to the left or right half of the array,
depending on the value of the middle element.
The above idea can be implemented as follows:
\begin{lstlisting}
int a = 1, b = n;
while (a <= b) {
    int k = (a+b)/2;
    if (t[k] == x) {} // x found at index k
    if (t[k] > x) b = k-1;
    else a = k+1;
}
\end{lstlisting}
The algorithm maintains a range $a \ldots b$
that corresponds to the active region of the array.
Initially, the range is $1 \ldots n$, the whole array.
The algorithm halves the size of the range at each step,
so the time complexity is $O(\log n)$.

\subsubsection{Method 2}
An alternative method for implementing binary search
is based on an efficient way to iterate through
the elements of the array.
The idea is to make jumps and slow down
when we get closer to the target element.

The search goes through the array from left to
right, and the initial jump length is $n/2$.
At each step, the jump length will be halved:
first $n/4$, then $n/8$, $n/16$, etc., until
finally the length is 1.
After the jumps, either the target element has
been found or we know that it does not appear in the array.
The following code implements the above idea:
\begin{lstlisting}
int k = 1;
for (int b = n/2; b >= 1; b /= 2) {
    while (k+b <= n && t[k+b] <= x) k += b;
}
if (t[k] == x) {} // x was found at index k
\end{lstlisting}
The variables $k$ and $b$ contain the position
in the array and the jump length.
If the array contains the element $x$,
the position of $x$ will be in the variable $k$
after the search.
The time complexity of the algorithm is $O(\log n)$,
because the code in the \texttt{while} loop
is performed at most twice for each jump length.

\subsubsection{Finding the smallest solution}

In practice, it is seldom necessary to implement
binary search for searching for elements in an array,
because we can use the standard library.
For example, the C++ functions \texttt{lower\_bound}
and \texttt{upper\_bound} implement binary search,
and the data structure \texttt{set} maintains a
set of elements with $O(\log n)$ time operations.

However, an important use for binary search is
to find the position where the value of a function changes.
Suppose that we wish to find the smallest value $k$
that is a valid solution for a problem.
We are given a function $\texttt{ok}(x)$
that returns \texttt{true} if $x$ is a valid solution
and \texttt{false} otherwise.
In addition, we know that $\texttt{ok}(x)$ is \texttt{false}
when $x<k$ and \texttt{true} when $x \ge k$.
The situation looks as follows:
\begin{center}
\begin{tabular}{r|rrrrrrrr}
$x$ & 0 & 1 & $\cdots$ & $k-1$ & $k$ & $k+1$ & $\cdots$ \\
\hline
$\texttt{ok}(x)$ & \texttt{false} & \texttt{false}
& $\cdots$ & \texttt{false} & \texttt{true} & \texttt{true} & $\cdots$ \\
\end{tabular}
\end{center}
\noindent
Now, the value of $k$ can be found using binary search:
\begin{lstlisting}
int x = -1;
for (int b = z; b >= 1; b /= 2) {
    while (!ok(x+b)) x += b;
}
int k = x+1;
\end{lstlisting}
The search finds the largest value of $x$ for which
$\texttt{ok}(x)$ is \texttt{false}.
Thus, the next value $k=x+1$
is the smallest possible value for which
$\texttt{ok}(k)$ is \texttt{true}.
The initial jump length $z$ has to be
large enough, for example some value
for which we know beforehand that $\texttt{ok}(z)$ is \texttt{true}.

The algorithm calls the function \texttt{ok}
$O(\log z)$ times, so the total time complexity
depends on the function \texttt{ok}.
For example, if the function works in $O(n)$ time,
the total time complexity is $O(n \log z)$.
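As a concrete example, suppose that we want to find the smallest
$k$ such that $k^2 \ge n$ (a hypothetical problem used only for
illustration). Choosing $z=n$ is safe here, because $\texttt{ok}(n)$
is \texttt{true} for every $n \ge 1$:
\begin{lstlisting}
long long n = 90; // hypothetical input; the answer is k = 10

bool ok(long long x) {
    // false while x*x < n, true from the answer onwards
    return x*x >= n;
}

long long smallestSolution() {
    long long x = -1;
    for (long long b = n; b >= 1; b /= 2) {
        while (!ok(x+b)) x += b;
    }
    return x+1; // smallest value k with ok(k) true
}
\end{lstlisting}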
\subsubsection{Finding the maximum value}

Binary search can also be used to find
the maximum value of a function that is
first increasing and then decreasing.
Our task is to find a position $k$ such that
\begin{itemize}
\item
$f(x)<f(x+1)$ when $x<k$, and
\item
$f(x)>f(x+1)$ when $x \ge k$.
\end{itemize}
The idea is to use binary search
for finding the largest value of $x$
for which $f(x)<f(x+1)$.
This implies that $k=x+1$
because $f(x+1)>f(x+2)$.
The following code implements the search:
\begin{lstlisting}
int x = -1;
for (int b = z; b >= 1; b /= 2) {
    while (f(x+b) < f(x+b+1)) x += b;
}
int k = x+1;
\end{lstlisting}
Note that unlike in the ordinary binary search,
here it is not allowed that consecutive values
of the function are equal;
otherwise it would not be possible to know
how to continue the search.