\chapter{Time complexity}

\index{time complexity}

The efficiency of algorithms is important in competitive programming.
Usually, it is easy to design an algorithm
that solves the problem slowly,
but the real challenge is to invent a
fast algorithm.
If the algorithm is too slow, it will get only
partial points or no points at all.

The \key{time complexity} of an algorithm
estimates how much time the algorithm will use
for some input.
The idea is to represent the efficiency
as a function whose parameter is the size of the input.
By calculating the time complexity,
we can find out whether the algorithm is fast enough
without implementing it.
\section{Calculation rules}
The time complexity of an algorithm
is denoted $O(\cdots)$
where the three dots represent some
function.
Usually, the variable $n$ denotes
the input size.
For example, if the input is an array of numbers,
$n$ will be the size of the array,
and if the input is a string,
$n$ will be the length of the string.
\subsubsection*{Loops}

A common reason why an algorithm is slow is
that it contains many loops that go through the input.
The more nested loops the algorithm contains,
the slower it is.
If there are $k$ nested loops,
the time complexity is $O(n^k)$.
For example, the time complexity of the following code is $O(n)$:
\begin{lstlisting}
for (int i = 1; i <= n; i++) {
  // code
}
\end{lstlisting}
And the time complexity of the following code is $O(n^2)$:
\begin{lstlisting}
for (int i = 1; i <= n; i++) {
  for (int j = 1; j <= n; j++) {
    // code
  }
}
\end{lstlisting}
\subsubsection*{Order of magnitude}

A time complexity does not tell us the exact number
of times the code inside a loop is executed,
but only shows the order of magnitude.
In the following examples, the code inside the loop
is executed $3n$, $n+5$ and $\lceil n/2 \rceil$ times,
but the time complexity of each code is $O(n)$.
\begin{lstlisting}
for (int i = 1; i <= 3*n; i++) {
  // code
}
\end{lstlisting}
\begin{lstlisting}
for (int i = 1; i <= n+5; i++) {
  // code
}
\end{lstlisting}
\begin{lstlisting}
for (int i = 1; i <= n; i += 2) {
  // code
}
\end{lstlisting}
As another example, the time complexity of the following code is $O(n^2)$:
\begin{lstlisting}
for (int i = 1; i <= n; i++) {
  for (int j = i+1; j <= n; j++) {
    // code
  }
}
\end{lstlisting}
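One way to see why this code is still $O(n^2)$ is to count the iterations
directly: the inner code runs
$(n-1)+(n-2)+\cdots+1+0 = n(n-1)/2$ times,
and only the order of magnitude matters.
As a sketch, the count can be verified with a helper function
(the name \texttt{count\_pairs} is only for illustration):
\begin{lstlisting}
// counts how many times the inner code above is executed;
// the result n(n-1)/2 is of order O(n^2)
long long count_pairs(int n) {
  long long count = 0;
  for (int i = 1; i <= n; i++) {
    for (int j = i+1; j <= n; j++) {
      count++;
    }
  }
  return count;
}
\end{lstlisting}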
\subsubsection*{Phases}

If the algorithm consists of consecutive phases,
the total time complexity is the largest
time complexity of a single phase.
The reason for this is that the slowest
phase is usually the bottleneck of the code.

For example, the following code consists
of three phases with time complexities
$O(n)$, $O(n^2)$ and $O(n)$.
Thus, the total time complexity is $O(n^2)$.
\begin{lstlisting}
for (int i = 1; i <= n; i++) {
  // code
}
for (int i = 1; i <= n; i++) {
  for (int j = 1; j <= n; j++) {
    // code
  }
}
for (int i = 1; i <= n; i++) {
  // code
}
\end{lstlisting}
\subsubsection*{Several variables}

Sometimes the time complexity depends on
several factors.
In this case, the time complexity formula
contains several variables.

For example, the time complexity of the
following code is $O(nm)$:
\begin{lstlisting}
for (int i = 1; i <= n; i++) {
  for (int j = 1; j <= m; j++) {
    // code
  }
}
\end{lstlisting}
\subsubsection*{Recursion}

The time complexity of a recursive function
depends on the number of times the function is called
and the time complexity of a single call.
The total time complexity is the product of
these values.

For example, consider the following function:
\begin{lstlisting}
void f(int n) {
  if (n == 1) return;
  f(n-1);
}
\end{lstlisting}
The call $\texttt{f}(n)$ causes $n$ function calls,
and the time complexity of each call is $O(1)$.
Thus, the total time complexity is $O(n)$.
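As a sketch, the number of calls can be verified by adding a counter
to the function (the global variable \texttt{calls} is only for
illustration):
\begin{lstlisting}
int calls = 0; // counts the function calls
void f(int n) {
  calls++;
  if (n == 1) return;
  f(n-1);
}
\end{lstlisting}
After calling $\texttt{f}(n)$ with $n \ge 1$,
the counter \texttt{calls} equals $n$.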
As another example, consider the following function:
\begin{lstlisting}
void g(int n) {
  if (n == 1) return;
  g(n-1);
  g(n-1);
}
\end{lstlisting}
In this case each function call generates two other
calls, except for $n=1$.
Let us see what happens when $g$ is called
with parameter $n$.
The following table shows the function calls
produced by this single call:
\begin{center}
\begin{tabular}{rr}
function call & number of calls \\
\hline
$g(n)$ & 1 \\
$g(n-1)$ & 2 \\
$g(n-2)$ & 4 \\
$\cdots$ & $\cdots$ \\
$g(1)$ & $2^{n-1}$ \\
\end{tabular}
\end{center}
Based on this, the time complexity is
\[1+2+4+\cdots+2^{n-1} = 2^n-1 = O(2^n).\]
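As a sketch, the total $2^n-1$ can be checked by adding a counter to
$g$ (the counter variable is only for illustration):
\begin{lstlisting}
long long calls = 0; // counts the function calls
void g(int n) {
  calls++;
  if (n == 1) return;
  g(n-1);
  g(n-1);
}
\end{lstlisting}
After calling $\texttt{g}(n)$, the counter equals $2^n-1$.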
\section{Complexity classes}

\index{complexity classes}

The following list contains common time complexities
of algorithms:
\begin{description}
\item[$O(1)$]
\index{constant-time algorithm}
The running time of a \key{constant-time} algorithm
does not depend on the input size.
A typical constant-time algorithm is a direct
formula that calculates the answer.
\item[$O(\log n)$]
\index{logarithmic algorithm}
A \key{logarithmic} algorithm often halves
the input size at each step.
The running time of such an algorithm
is logarithmic, because
$\log_2 n$ equals the number of times
$n$ must be divided by 2 to get 1.
\item[$O(\sqrt n)$]
A \key{square root algorithm} is slower than
$O(\log n)$ but faster than $O(n)$.
A special property of square roots is that
$\sqrt n = n/\sqrt n$, so the square root $\sqrt n$ lies,
in some sense, in the middle of the input.
\item[$O(n)$]
\index{linear algorithm}
A \key{linear} algorithm goes through the input
a constant number of times.
This is often the best possible time complexity,
because it is usually necessary to access each
input element at least once before
reporting the answer.
\item[$O(n \log n)$]
This time complexity often indicates that the
algorithm sorts the input,
because the time complexity of efficient
sorting algorithms is $O(n \log n)$.
Another possibility is that the algorithm
uses a data structure where each operation
takes $O(\log n)$ time.
\item[$O(n^2)$]
\index{quadratic algorithm}
A \key{quadratic} algorithm often contains
two nested loops.
It is possible to go through all pairs of
the input elements in $O(n^2)$ time.
\item[$O(n^3)$]
\index{cubic algorithm}
A \key{cubic} algorithm often contains
three nested loops.
It is possible to go through all triplets of
the input elements in $O(n^3)$ time.
\item[$O(2^n)$]
This time complexity often indicates that
the algorithm iterates through all
subsets of the input elements.
For example, the subsets of $\{1,2,3\}$ are
$\emptyset$, $\{1\}$, $\{2\}$, $\{3\}$, $\{1,2\}$,
$\{1,3\}$, $\{2,3\}$ and $\{1,2,3\}$.
\item[$O(n!)$]
This time complexity often indicates that
the algorithm iterates through all
permutations of the input elements.
For example, the permutations of $\{1,2,3\}$ are
$(1,2,3)$, $(1,3,2)$, $(2,1,3)$, $(2,3,1)$,
$(3,1,2)$ and $(3,2,1)$.
\end{description}
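To make two of the classes above more concrete, consider the following
sketches (both functions are only for illustration).
The first counts the halvings behind a logarithmic algorithm,
and the second is a typical square root algorithm:
a primality test that tries divisors up to $\sqrt n$.
\begin{lstlisting}
// counts how many times n can be halved until it becomes 1;
// the loop runs about log2(n) times, i.e., in O(log n) time
int halvings(int n) {
  int steps = 0;
  while (n > 1) {
    n /= 2;
    steps++;
  }
  return steps;
}

// tests whether n is prime by trying divisors up to sqrt(n);
// the loop runs O(sqrt(n)) times
bool prime(long long n) {
  if (n < 2) return false;
  for (long long i = 2; i*i <= n; i++) {
    if (n % i == 0) return false;
  }
  return true;
}
\end{lstlisting}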
\index{polynomial algorithm}
An algorithm is \key{polynomial}
if its time complexity is at most $O(n^k)$
where $k$ is a constant.
All the above time complexities except
$O(2^n)$ and $O(n!)$ are polynomial.
In practice, the constant $k$ is usually small,
and therefore a polynomial time complexity
roughly means that the algorithm is \emph{efficient}.

\index{NP-hard problem}
Most algorithms in this book are polynomial.
Still, there are many important problems for which
no polynomial algorithm is known, i.e.,
nobody knows how to solve them efficiently.
\key{NP-hard} problems are an important set
of problems, for which no polynomial algorithm
is known\footnote{A classic book on the topic is
M. R. Garey's and D. S. Johnson's
\emph{Computers and Intractability: A Guide to the Theory
of NP-Completeness} \cite{gar79}.}.
\section{Estimating efficiency}

By calculating the time complexity of an algorithm,
it is possible to check, before
implementing the algorithm, that it is
efficient enough for the problem.
The starting point for estimations is the fact that
a modern computer can perform some hundreds of
millions of operations in a second.

For example, assume that the time limit for
a problem is one second and the input size is $n=10^5$.
If the time complexity is $O(n^2)$,
the algorithm will perform about $(10^5)^2=10^{10}$ operations.
This should take at least some tens of seconds,
so the algorithm seems to be too slow for solving the problem.

On the other hand, given the input size,
we can try to \emph{guess}
the required time complexity of the algorithm
that solves the problem.
The following table contains some useful estimates
assuming a time limit of one second.

\begin{center}
\begin{tabular}{ll}
input size & required time complexity \\
\hline
$n \le 10$ & $O(n!)$ \\
$n \le 20$ & $O(2^n)$ \\
$n \le 500$ & $O(n^3)$ \\
$n \le 5000$ & $O(n^2)$ \\
$n \le 10^6$ & $O(n \log n)$ or $O(n)$ \\
$n$ is large & $O(1)$ or $O(\log n)$ \\
\end{tabular}
\end{center}
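The table rows follow from simple arithmetic.
For example, for $n = 10^5$ (a sketch; the variable names are only
for illustration):
\begin{lstlisting}
long long n = 100000;  // n = 10^5
long long slow = n*n;  // O(n^2): about 10^10 operations
// at some hundreds of millions of operations per second,
// 10^10 operations correspond to tens of seconds,
// while an O(n log n) algorithm would need only about
// 10^5 * 17 = 1.7 * 10^6 operations
\end{lstlisting}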
For example, if the input size is $n=10^5$,
it is probably expected that the time
complexity of the algorithm is $O(n)$ or $O(n \log n)$.
This information makes it easier to design the algorithm,
because it rules out approaches that would yield
an algorithm with a worse time complexity.

\index{constant factor}
Still, it is important to remember that a
time complexity is only an estimate of efficiency,
because it hides the \emph{constant factors}.
For example, an algorithm that runs in $O(n)$ time
may perform $n/2$ or $5n$ operations.
This has an important effect on the actual
running time of the algorithm.
\section{Maximum subarray sum}

\index{maximum subarray sum}

There are often several possible algorithms
for solving a problem such that their
time complexities are different.
This section discusses a classic problem that
has a straightforward $O(n^3)$ solution.
However, by designing a better algorithm, it
is possible to solve the problem in $O(n^2)$
time and even in $O(n)$ time.

Given an array of $n$ numbers,
our task is to calculate the
\key{maximum subarray sum}, i.e.,
the largest possible sum of
a sequence of consecutive values
in the array\footnote{J. Bentley's
book \emph{Programming Pearls} \cite{ben86} made the problem popular.}.
The problem is interesting when there may be
negative values in the array.
For example, in the array
\begin{center}
\begin{tikzpicture}[scale=0.7]
\draw (0,0) grid (8,1);
\node at (0.5,0.5) {$-1$};
\node at (1.5,0.5) {$2$};
\node at (2.5,0.5) {$4$};
\node at (3.5,0.5) {$-3$};
\node at (4.5,0.5) {$5$};
\node at (5.5,0.5) {$2$};
\node at (6.5,0.5) {$-5$};
\node at (7.5,0.5) {$2$};
\end{tikzpicture}
\end{center}
\begin{samepage}
the following subarray produces the maximum sum $10$:
\begin{center}
\begin{tikzpicture}[scale=0.7]
\fill[color=lightgray] (1,0) rectangle (6,1);
\draw (0,0) grid (8,1);
\node at (0.5,0.5) {$-1$};
\node at (1.5,0.5) {$2$};
\node at (2.5,0.5) {$4$};
\node at (3.5,0.5) {$-3$};
\node at (4.5,0.5) {$5$};
\node at (5.5,0.5) {$2$};
\node at (6.5,0.5) {$-5$};
\node at (7.5,0.5) {$2$};
\end{tikzpicture}
\end{center}
\end{samepage}
We assume that an empty subarray is allowed,
so the maximum subarray sum is always at least $0$.
\subsubsection{Algorithm 1}

A straightforward way to solve the problem
is to go through all possible subarrays,
calculate the sum of values in each subarray and maintain
the maximum sum.
The following code implements this algorithm:
\begin{lstlisting}
int best = 0;
for (int a = 0; a < n; a++) {
  for (int b = a; b < n; b++) {
    int sum = 0;
    for (int k = a; k <= b; k++) {
      sum += array[k];
    }
    best = max(best,sum);
  }
}
cout << best << "\n";
\end{lstlisting}
The variables \texttt{a} and \texttt{b} fix the first and
last index of the subarray,
and the sum of values is accumulated in the variable \texttt{sum}.
The variable \texttt{best} contains the maximum sum found during the search.

The time complexity of the algorithm is $O(n^3)$,
because it consists of three nested loops
that go through the input.
\subsubsection{Algorithm 2}

It is easy to make Algorithm 1 more efficient
by removing one loop from it.
This is possible by calculating the sum at the same
time when the right end of the subarray moves.
The result is the following code:
\begin{lstlisting}
int best = 0;
for (int a = 0; a < n; a++) {
  int sum = 0;
  for (int b = a; b < n; b++) {
    sum += array[b];
    best = max(best,sum);
  }
}
cout << best << "\n";
\end{lstlisting}
After this change, the time complexity is $O(n^2)$.
\subsubsection{Algorithm 3}

Surprisingly, it is possible to solve the problem
in $O(n)$ time\footnote{In \cite{ben86}, this linear-time algorithm
is attributed to J. B. Kadane, and the algorithm is sometimes
called \index{Kadane's algorithm} \key{Kadane's algorithm}.}, which means
that just one loop is enough.
The idea is to calculate, for each array position,
the maximum sum of a subarray that ends at that position.
After this, the answer for the problem is the
maximum of those sums.

Consider the subproblem of finding the maximum-sum subarray
that ends at position $k$.
There are two possibilities:
\begin{enumerate}
\item The subarray only contains the element at position $k$.
\item The subarray consists of a subarray that ends
at position $k-1$, followed by the element at position $k$.
\end{enumerate}
In the latter case, since we want to
find a subarray with maximum sum,
the subarray that ends at position $k-1$
should also have the maximum sum.
Thus, we can solve the problem efficiently
by calculating the maximum subarray sum
for each ending position from left to right.

The following code implements the algorithm:
\begin{lstlisting}
int best = 0, sum = 0;
for (int k = 0; k < n; k++) {
  sum = max(array[k],sum+array[k]);
  best = max(best,sum);
}
cout << best << "\n";
\end{lstlisting}
The algorithm only contains one loop
that goes through the input,
so the time complexity is $O(n)$.
This is also the best possible time complexity,
because any algorithm for the problem
has to examine all array elements at least once.
\subsubsection{Efficiency comparison}

It is interesting to study how efficient
algorithms are in practice.
The following table shows the running times
of the above algorithms for different
values of $n$ on a modern computer.

In each test, the input was generated randomly.
The time needed for reading the input was not
measured.
\begin{center}
\begin{tabular}{rrrr}
array size $n$ & Algorithm 1 & Algorithm 2 & Algorithm 3 \\
\hline
$10^2$ & $0.0$ s & $0.0$ s & $0.0$ s \\
$10^3$ & $0.1$ s & $0.0$ s & $0.0$ s \\
$10^4$ & $> 10.0$ s & $0.1$ s & $0.0$ s \\
$10^5$ & $> 10.0$ s & $5.3$ s & $0.0$ s \\
$10^6$ & $> 10.0$ s & $> 10.0$ s & $0.0$ s \\
$10^7$ & $> 10.0$ s & $> 10.0$ s & $0.0$ s \\
\end{tabular}
\end{center}
The comparison shows that all algorithms
are efficient when the input size is small,
but larger inputs bring out remarkable
differences in the running times of the algorithms.
Algorithm 1 becomes slow
when $n=10^4$, and Algorithm 2
becomes slow when $n=10^5$.
Only Algorithm 3 is able to process
even the largest inputs instantly.
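Such a comparison can be reproduced with a simple timing harness.
The following sketch (the function names and parameters are our own,
not from the experiments above) measures Algorithm 3 on a random
array using \texttt{std::chrono}:
\begin{lstlisting}
// Algorithm 3 as a function; returns the maximum subarray sum
long long solve(const vector<int>& array) {
  long long best = 0, sum = 0;
  for (int x : array) {
    sum = max((long long)x, sum+x);
    best = max(best, sum);
  }
  return best;
}

// generates a random array of n values in [-100,100] and
// returns the time in seconds that solve uses for it
double measure(int n) {
  mt19937 rng(12345);
  vector<int> array(n);
  for (int& x : array) x = (int)(rng()%201)-100;
  auto start = chrono::high_resolution_clock::now();
  long long best = solve(array);
  (void)best; // suppress the unused-variable warning
  chrono::duration<double> elapsed =
      chrono::high_resolution_clock::now()-start;
  return elapsed.count();
}
\end{lstlisting}
The other algorithms can be timed in the same way; with them, the
measured times grow roughly by factors of $1000$ and $100$ when $n$
is multiplied by $10$.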