\chapter{Time complexity}
\index{time complexity}
The efficiency of algorithms is important in competitive programming.
Usually, it is easy to design an algorithm
that solves the problem slowly,
but the real challenge is to invent a
fast algorithm.
If the algorithm is too slow, it will get only
partial points or no points at all.

The \key{time complexity} of an algorithm
estimates how much time the algorithm will use
for some input.
The idea is to represent the efficiency
as a function whose parameter is the size of the input.
By calculating the time complexity,
we can find out whether the algorithm is good enough
without implementing it.
\section{Calculation rules}
The time complexity of an algorithm
is denoted $O(\cdots)$
where the three dots represent some
function.
Usually, the variable $n$ denotes
the input size.
For example, if the input is an array of numbers,
$n$ will be the size of the array,
and if the input is a string,
$n$ will be the length of the string.
\subsubsection*{Loops}
A common reason why an algorithm is slow is
that it contains many loops that go through the input.
The more nested loops the algorithm contains,
the slower it is.
If there are $k$ nested loops,
the time complexity is $O(n^k)$.

For example, the time complexity of the following code is $O(n)$:
\begin{lstlisting}
for (int i = 1; i <= n; i++) {
    // code
}
\end{lstlisting}

And the time complexity of the following code is $O(n^2)$:
\begin{lstlisting}
for (int i = 1; i <= n; i++) {
    for (int j = 1; j <= n; j++) {
        // code
    }
}
\end{lstlisting}
\subsubsection*{Order of magnitude}

A time complexity does not tell us the exact number
of times the code inside a loop is executed,
but it only shows the order of magnitude.
In the following examples, the code inside the loop
is executed $3n$, $n+5$ and $\lceil n/2 \rceil$ times,
but the time complexity of each code is $O(n)$.
\begin{lstlisting}
for (int i = 1; i <= 3*n; i++) {
    // code
}
\end{lstlisting}

\begin{lstlisting}
for (int i = 1; i <= n+5; i++) {
    // code
}
\end{lstlisting}

\begin{lstlisting}
for (int i = 1; i <= n; i += 2) {
    // code
}
\end{lstlisting}
As another example,
the time complexity of the following code is $O(n^2)$:
\begin{lstlisting}
for (int i = 1; i <= n; i++) {
    for (int j = i+1; j <= n; j++) {
        // code
    }
}
\end{lstlisting}
Here the inner code is executed
$(n-1)+(n-2)+\cdots+0 = n(n-1)/2$ times,
which is of the same order of magnitude as $n^2$.
\subsubsection*{Phases}

If the code consists of consecutive phases,
the total time complexity is the largest
time complexity of a single phase.
The reason for this is that the slowest
phase is usually the bottleneck of the code.

For example, the following code consists
of three phases with time complexities
$O(n)$, $O(n^2)$ and $O(n)$.
Thus, the total time complexity is $O(n^2)$.
\begin{lstlisting}
for (int i = 1; i <= n; i++) {
    // code
}
for (int i = 1; i <= n; i++) {
    for (int j = 1; j <= n; j++) {
        // code
    }
}
for (int i = 1; i <= n; i++) {
    // code
}
\end{lstlisting}
\subsubsection*{Several variables}

Sometimes the time complexity depends on
several factors.
In this case, the time complexity formula
contains several variables.

For example, the time complexity of the
following code is $O(nm)$:
\begin{lstlisting}
for (int i = 1; i <= n; i++) {
    for (int j = 1; j <= m; j++) {
        // code
    }
}
\end{lstlisting}
\subsubsection*{Recursion}

The time complexity of a recursive function
depends on the number of times the function is called
and the time complexity of a single call.
The total time complexity is the product of
these values.

For example, consider the following function:
\begin{lstlisting}
void f(int n) {
    if (n == 1) return;
    f(n-1);
}
\end{lstlisting}
The call $\texttt{f}(n)$ causes $n$ function calls,
and the time complexity of each call is $O(1)$.
Thus, the total time complexity is $O(n)$.

As another example, consider the following function:
\begin{lstlisting}
void g(int n) {
    if (n == 1) return;
    g(n-1);
    g(n-1);
}
\end{lstlisting}
In this case each function call generates two other
calls, except for $n=1$.
Hence, the call $\texttt{g}(n)$ causes the following calls:
\begin{center}
\begin{tabular}{rr}
function call & number of calls \\
\hline
$\texttt{g}(n)$ & 1 \\
$\texttt{g}(n-1)$ & 2 \\
$\cdots$ & $\cdots$ \\
$\texttt{g}(1)$ & $2^{n-1}$ \\
\end{tabular}
\end{center}
Based on this, the time complexity is
\[1+2+4+\cdots+2^{n-1} = 2^n-1 = O(2^n).\]
\section{Complexity classes}

\index{complexity classes}

The following list contains common time complexities
of algorithms:
\begin{description}
\item[$O(1)$]
\index{constant-time algorithm}
The running time of a \key{constant-time} algorithm
does not depend on the input size.
A typical constant-time algorithm is a direct
formula that calculates the answer.

\item[$O(\log n)$]
\index{logarithmic algorithm}
A \key{logarithmic} algorithm often halves
the input size at each step.
The running time of such an algorithm
is logarithmic, because
$\log_2 n$ equals the number of times
$n$ must be divided by 2 to get 1.

\item[$O(\sqrt n)$]
A \key{square root algorithm} is slower than
$O(\log n)$ but faster than $O(n)$.
A special property of square roots is that
$\sqrt n = n/\sqrt n$, so the square root $\sqrt n$ lies,
in some sense, in the middle of the input.

\item[$O(n)$]
\index{linear algorithm}
A \key{linear} algorithm goes through the input
a constant number of times.
This is often the best possible time complexity,
because it is usually necessary to access each
input element at least once before
reporting the answer.

\item[$O(n \log n)$]
This time complexity often indicates that the
algorithm sorts the input,
because the time complexity of efficient
sorting algorithms is $O(n \log n)$.
Another possibility is that the algorithm
uses a data structure where each operation
takes $O(\log n)$ time.

\item[$O(n^2)$]
\index{quadratic algorithm}
A \key{quadratic} algorithm often contains
two nested loops.
It is possible to go through all pairs of
the input elements in $O(n^2)$ time.

\item[$O(n^3)$]
\index{cubic algorithm}
A \key{cubic} algorithm often contains
three nested loops.
It is possible to go through all triplets of
the input elements in $O(n^3)$ time.

\item[$O(2^n)$]
This time complexity often indicates that
the algorithm iterates through all
subsets of the input elements.
For example, the subsets of $\{1,2,3\}$ are
$\emptyset$, $\{1\}$, $\{2\}$, $\{3\}$, $\{1,2\}$,
$\{1,3\}$, $\{2,3\}$ and $\{1,2,3\}$.

\item[$O(n!)$]
This time complexity often indicates that
the algorithm iterates through all
permutations of the input elements.
For example, the permutations of $\{1,2,3\}$ are
$(1,2,3)$, $(1,3,2)$, $(2,1,3)$, $(2,3,1)$,
$(3,1,2)$ and $(3,2,1)$.
\end{description}
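
As a concrete illustration of the last two classes,
the following code (a minimal sketch, not tied to any
particular problem) goes through all $2^n$ subsets of
$\{0,1,\ldots,n-1\}$ using bit operations, and all $n!$
permutations of the same elements using the standard library
function \texttt{next\_permutation}:
\begin{lstlisting}
#include <algorithm>
#include <vector>
using namespace std;

int main() {
    int n = 3;

    // each bit of b tells whether the corresponding
    // element belongs to the subset, so the loop
    // runs 2^n times
    for (int b = 0; b < (1<<n); b++) {
        // process the subset encoded by b
    }

    // p is first the sorted permutation (0,1,...,n-1),
    // and next_permutation advances it through the
    // remaining n!-1 permutations
    vector<int> p(n);
    for (int i = 0; i < n; i++) p[i] = i;
    do {
        // process the permutation p
    } while (next_permutation(p.begin(), p.end()));
}
\end{lstlisting}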
\index{polynomial algorithm}

An algorithm is \key{polynomial}
if its time complexity is at most $O(n^k)$
where $k$ is a constant.
All the above time complexities except
$O(2^n)$ and $O(n!)$ are polynomial.
In practice, the constant $k$ is usually small,
and therefore a polynomial time complexity
roughly means that the algorithm is \emph{efficient}.
\index{NP-hard problem}

Most algorithms in this book are polynomial.
Still, there are many important problems for which
no polynomial algorithm is known, i.e.,
nobody knows how to solve them efficiently.
\key{NP-hard} problems are an important set
of problems for which no polynomial algorithm is known \cite{gar79}.
\section{Estimating efficiency}

By calculating the time complexity of an algorithm,
it is possible to check, before
implementing the algorithm, that it is
efficient enough for the problem.
The starting point for estimations is the fact that
a modern computer can perform some hundreds of
millions of operations in a second.

For example, assume that the time limit for
a problem is one second and the input size is $n=10^5$.
If the time complexity is $O(n^2)$,
the algorithm will perform about $(10^5)^2=10^{10}$ operations.
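For instance, if the machine performs a few hundred million
operations per second (say, $3 \cdot 10^8$, an assumed figure),
the running time would be about
\[\frac{10^{10}}{3 \cdot 10^8} \approx 33 \ \textrm{seconds}.\]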
This should take at least some tens of seconds,
so the algorithm seems to be too slow for solving the problem.

On the other hand, given the input size,
we can try to guess
the required time complexity of the algorithm
that solves the problem.
The following table contains some useful estimates
assuming that the time limit is one second.
2016-12-28 23:54:51 +01:00
\begin{center}
\begin{tabular}{ll}
2017-02-25 11:39:37 +01:00
typical input size & required time complexity \\
2016-12-28 23:54:51 +01:00
\hline
2017-02-25 11:39:37 +01:00
$n \le 10$ & $O(n!)$ \\
$n \le 20$ & $O(2^n)$ \\
$n \le 500$ & $O(n^3)$ \\
$n \le 5000$ & $O(n^2)$ \\
$n \le 10^6$ & $O(n \log n)$ or $O(n)$ \\
2017-02-23 23:38:24 +01:00
$n$ is large & $O(1)$ or $O(\log n)$ \\
2016-12-28 23:54:51 +01:00
\end{tabular}
\end{center}
For example, if the input size is $n=10^5$,
it is probably expected that the time
complexity of the algorithm is $O(n)$ or $O(n \log n)$.
This information makes it easier to design the algorithm,
because it rules out approaches that would yield
an algorithm with a worse time complexity.

\index{constant factor}

Still, it is important to remember that a
time complexity is only an estimate of efficiency,
because it hides the \key{constant factors}.
For example, an algorithm that runs in $O(n)$ time
may perform $n/2$ or $5n$ operations.
This has an important effect on the actual
running time of the algorithm.
\section{Maximum subarray sum}

\index{maximum subarray sum}

There are often several possible algorithms
for solving a problem such that their
time complexities are different.
This section discusses a classic problem that
has a straightforward $O(n^3)$ solution.
However, by designing a better algorithm it
is possible to solve the problem in $O(n^2)$
time and even in $O(n)$ time.

Given an array of $n$ integers $x_1,x_2,\ldots,x_n$,
our task is to find the
\key{maximum subarray sum}\footnote{Bentley's
book \emph{Programming Pearls} \cite{ben86} made this problem popular.}, i.e.,
the largest possible sum of numbers
in a contiguous region in the array.
The problem is interesting when there may be
negative numbers in the array.
For example, in the array
\begin{center}
\begin{tikzpicture}[scale=0.7]
\draw (0,0) grid (8,1);
\node at (0.5,0.5) {$-1$};
\node at (1.5,0.5) {$2$};
\node at (2.5,0.5) {$4$};
\node at (3.5,0.5) {$-3$};
\node at (4.5,0.5) {$5$};
\node at (5.5,0.5) {$2$};
\node at (6.5,0.5) {$-5$};
\node at (7.5,0.5) {$2$};
\footnotesize
\node at (0.5,1.4) {$1$};
\node at (1.5,1.4) {$2$};
\node at (2.5,1.4) {$3$};
\node at (3.5,1.4) {$4$};
\node at (4.5,1.4) {$5$};
\node at (5.5,1.4) {$6$};
\node at (6.5,1.4) {$7$};
\node at (7.5,1.4) {$8$};
\end{tikzpicture}
\end{center}
\begin{samepage}
the following subarray produces the maximum sum $10$:
\begin{center}
\begin{tikzpicture}[scale=0.7]
\fill[color=lightgray] (1,0) rectangle (6,1);
\draw (0,0) grid (8,1);
\node at (0.5,0.5) {$-1$};
\node at (1.5,0.5) {$2$};
\node at (2.5,0.5) {$4$};
\node at (3.5,0.5) {$-3$};
\node at (4.5,0.5) {$5$};
\node at (5.5,0.5) {$2$};
\node at (6.5,0.5) {$-5$};
\node at (7.5,0.5) {$2$};
\footnotesize
\node at (0.5,1.4) {$1$};
\node at (1.5,1.4) {$2$};
\node at (2.5,1.4) {$3$};
\node at (3.5,1.4) {$4$};
\node at (4.5,1.4) {$5$};
\node at (5.5,1.4) {$6$};
\node at (6.5,1.4) {$7$};
\node at (7.5,1.4) {$8$};
\end{tikzpicture}
\end{center}
\end{samepage}
\subsubsection{Algorithm 1}

A straightforward way to solve the problem
is to go through all possible ways of
selecting a subarray, calculate the sum of
the numbers in each subarray and maintain
the maximum sum.
The following code implements this algorithm:
\begin{lstlisting}
int p = 0;
for (int a = 1; a <= n; a++) {
    for (int b = a; b <= n; b++) {
        int s = 0;
        for (int c = a; c <= b; c++) {
            s += x[c];
        }
        p = max(p,s);
    }
}
cout << p << "\n";
\end{lstlisting}
The code assumes that the numbers are stored in an array \texttt{x}
with indices $1 \ldots n$.
The variables $a$ and $b$ determine the first and last
number in the subarray,
and the sum of the numbers is calculated into the variable $s$.
The variable $p$ contains the maximum sum found during the search.

The time complexity of the algorithm is $O(n^3)$,
because it consists of three nested loops
that go through the input.
\subsubsection{Algorithm 2}

It is easy to make the first algorithm more efficient
by removing one loop from it.
This is possible by calculating the sum at the same
time as the right end of the subarray moves.
The result is the following code:
\begin{lstlisting}
int p = 0;
for (int a = 1; a <= n; a++) {
    int s = 0;
    for (int b = a; b <= n; b++) {
        s += x[b];
        p = max(p,s);
    }
}
cout << p << "\n";
\end{lstlisting}
After this change, the time complexity is $O(n^2)$.

\subsubsection{Algorithm 3}

Surprisingly, it is possible to solve the problem
in $O(n)$ time, which means that we can remove
one more loop.
The idea is to calculate, for each array position,
the maximum sum of a subarray that ends at that position.
After this, the answer for the problem is the
maximum of those sums.

Consider the subproblem of finding the maximum-sum subarray
that ends at position $k$.
There are two possibilities:
\begin{enumerate}
\item The subarray only contains the element at position $k$.
\item The subarray consists of a subarray that ends
at position $k-1$, followed by the element at position $k$.
\end{enumerate}

Our goal is to find a subarray with maximum sum,
so in case 2 the subarray that ends at position $k-1$
should also have the maximum sum.
Thus, we can solve the problem efficiently
by calculating the maximum subarray sum
for each ending position from left to right.
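
In terms of a formula: if $s_k$ denotes the maximum sum of a
subarray ending at position $k$ (with $s_1 = x_1$), the two cases
above give the recurrence
\[s_k = \max(x_k, s_{k-1} + x_k).\]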

The following code implements the algorithm:
\begin{lstlisting}
int p = 0, s = 0;
for (int k = 1; k <= n; k++) {
    s = max(x[k],s+x[k]);
    p = max(p,s);
}
cout << p << "\n";
\end{lstlisting}
For the example array above, the maximum subarray sums
ending at positions $1,2,\ldots,8$ are
$-1$, $2$, $6$, $3$, $8$, $10$, $5$ and $7$,
and the largest of them, $10$, is the answer.

The algorithm only contains one loop
that goes through the input,
so the time complexity is $O(n)$.
This is also the best possible time complexity,
because any algorithm for the problem
has to examine all array elements at least once.

\subsubsection{Efficiency comparison}

It is interesting to study how efficient
the algorithms are in practice.
The following table shows the running times
of the above algorithms for different
values of $n$ on a modern computer.

In each test, the input was generated randomly.
The time needed for reading the input was not
measured.
\begin{center}
\begin{tabular}{rrrr}
array size $n$ & algorithm 1 & algorithm 2 & algorithm 3 \\
\hline
$10^2$ & $0.0$ s & $0.0$ s & $0.0$ s \\
$10^3$ & $0.1$ s & $0.0$ s & $0.0$ s \\
$10^4$ & > $10.0$ s & $0.1$ s & $0.0$ s \\
$10^5$ & > $10.0$ s & $5.3$ s & $0.0$ s \\
$10^6$ & > $10.0$ s & > $10.0$ s & $0.0$ s \\
$10^7$ & > $10.0$ s & > $10.0$ s & $0.0$ s \\
\end{tabular}
\end{center}

The comparison shows that all the algorithms
are efficient when the input size is small,
but larger inputs bring out remarkable
differences in the running times of the algorithms.
The $O(n^3)$ time algorithm 1 becomes slow
when $n=10^4$, and the $O(n^2)$ time algorithm 2
becomes slow when $n=10^5$.
Only the $O(n)$ time algorithm 3 processes
even the largest inputs instantly.
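
Running times like those above can also be measured within the
program itself.
The following code shows one possible way to do this, using the
standard \texttt{chrono} library (a minimal sketch only; this is
not necessarily how the table above was produced):
\begin{lstlisting}
#include <chrono>
#include <iostream>
using namespace std;

int main() {
    // read the clock before running the algorithm
    auto start = chrono::steady_clock::now();

    // run the algorithm to be measured here
    // ...

    // read the clock again and print the elapsed
    // time in seconds
    auto end = chrono::steady_clock::now();
    chrono::duration<double> elapsed = end-start;
    cout << elapsed.count() << " s" << "\n";
}
\end{lstlisting}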