\chapter{Square root algorithms}

\index{square root algorithm}

A \key{square root algorithm} is an algorithm
that has a square root in its time complexity.
A square root can be seen as a ``poor man's logarithm'':
the complexity $O(\sqrt n)$ is better than $O(n)$
but worse than $O(\log n)$.
In any case, many square root algorithms are fast and usable in practice.
As an example, let us consider the problem of
creating a data structure that supports
two operations on an array:
modifying an element at a given position
and calculating the sum of elements in a given range.
We have previously solved the problem using
binary indexed trees and segment trees,
which support both operations in $O(\log n)$ time.
However, now we will solve the problem
in another way using a square root structure
that allows us to modify elements in $O(1)$ time
and calculate sums in $O(\sqrt n)$ time.

The idea is to divide the array into \emph{blocks}
of size $\sqrt n$ so that each block contains
the sum of the elements inside the block.
For example, an array of 16 elements will be
divided into blocks of 4 elements as follows:
\begin{center}
\begin{tikzpicture}[scale=0.7]
\draw (0,0) grid (16,1);

\draw (0,1) rectangle (4,2);
\draw (4,1) rectangle (8,2);
\draw (8,1) rectangle (12,2);
\draw (12,1) rectangle (16,2);

\node at (0.5, 0.5) {5};
\node at (1.5, 0.5) {8};
\node at (2.5, 0.5) {6};
\node at (3.5, 0.5) {3};
\node at (4.5, 0.5) {2};
\node at (5.5, 0.5) {7};
\node at (6.5, 0.5) {2};
\node at (7.5, 0.5) {6};
\node at (8.5, 0.5) {7};
\node at (9.5, 0.5) {1};
\node at (10.5, 0.5) {7};
\node at (11.5, 0.5) {5};
\node at (12.5, 0.5) {6};
\node at (13.5, 0.5) {2};
\node at (14.5, 0.5) {3};
\node at (15.5, 0.5) {2};

\node at (2, 1.5) {22};
\node at (6, 1.5) {17};
\node at (10, 1.5) {20};
\node at (14, 1.5) {13};
\end{tikzpicture}
\end{center}

In this structure,
it is easy to modify array elements,
because we only need to update
the sum of a single block
after each modification,
which can be done in $O(1)$ time.
For example, the following picture shows
how the value of an element and
the sum of the corresponding block change:
\begin{center}
\begin{tikzpicture}[scale=0.7]
\fill[color=lightgray] (5,0) rectangle (6,1);
\draw (0,0) grid (16,1);

\fill[color=lightgray] (4,1) rectangle (8,2);
\draw (0,1) rectangle (4,2);
\draw (4,1) rectangle (8,2);
\draw (8,1) rectangle (12,2);
\draw (12,1) rectangle (16,2);

\node at (0.5, 0.5) {5};
\node at (1.5, 0.5) {8};
\node at (2.5, 0.5) {6};
\node at (3.5, 0.5) {3};
\node at (4.5, 0.5) {2};
\node at (5.5, 0.5) {5};
\node at (6.5, 0.5) {2};
\node at (7.5, 0.5) {6};
\node at (8.5, 0.5) {7};
\node at (9.5, 0.5) {1};
\node at (10.5, 0.5) {7};
\node at (11.5, 0.5) {5};
\node at (12.5, 0.5) {6};
\node at (13.5, 0.5) {2};
\node at (14.5, 0.5) {3};
\node at (15.5, 0.5) {2};

\node at (2, 1.5) {22};
\node at (6, 1.5) {15};
\node at (10, 1.5) {20};
\node at (14, 1.5) {13};
\end{tikzpicture}
\end{center}

Calculating the sum of elements in a range is
a bit more difficult.
It turns out that we can always divide
the range into three parts such that
the sum consists of values of single elements
and sums of blocks between them:
\begin{center}
\begin{tikzpicture}[scale=0.7]
\fill[color=lightgray] (3,0) rectangle (4,1);
\fill[color=lightgray] (12,0) rectangle (13,1);
\fill[color=lightgray] (13,0) rectangle (14,1);
\draw (0,0) grid (16,1);

\fill[color=lightgray] (4,1) rectangle (8,2);
\fill[color=lightgray] (8,1) rectangle (12,2);
\draw (0,1) rectangle (4,2);
\draw (4,1) rectangle (8,2);
\draw (8,1) rectangle (12,2);
\draw (12,1) rectangle (16,2);

\node at (0.5, 0.5) {5};
\node at (1.5, 0.5) {8};
\node at (2.5, 0.5) {6};
\node at (3.5, 0.5) {3};
\node at (4.5, 0.5) {2};
\node at (5.5, 0.5) {5};
\node at (6.5, 0.5) {2};
\node at (7.5, 0.5) {6};
\node at (8.5, 0.5) {7};
\node at (9.5, 0.5) {1};
\node at (10.5, 0.5) {7};
\node at (11.5, 0.5) {5};
\node at (12.5, 0.5) {6};
\node at (13.5, 0.5) {2};
\node at (14.5, 0.5) {3};
\node at (15.5, 0.5) {2};

\node at (2, 1.5) {22};
\node at (6, 1.5) {15};
\node at (10, 1.5) {20};
\node at (14, 1.5) {13};

\draw [decoration={brace}, decorate, line width=0.5mm] (14,-0.25) -- (3,-0.25);
\end{tikzpicture}
\end{center}

Since the number of single elements is $O(\sqrt n)$
and the number of blocks is also $O(\sqrt n)$,
the time complexity of the sum query is $O(\sqrt n)$.
In this case, the parameter $\sqrt n$ balances two things:
the array is divided into $\sqrt n$ blocks,
each of which contains $\sqrt n$ elements.
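
The structure described above can be sketched in code as follows
(a minimal illustration; the names \texttt{SqrtSum}, \texttt{update} and
\texttt{query} are ours, not from the text):

```cpp
#include <algorithm>
#include <cassert>
#include <cmath>
#include <vector>
using namespace std;

// Block structure: point updates in O(1), range sum queries in O(sqrt(n)).
struct SqrtSum {
    int b;                    // block size, about sqrt(n)
    vector<long long> a;      // array values
    vector<long long> blocks; // sum of each block

    SqrtSum(const vector<long long>& v) : a(v) {
        int n = a.size();
        b = max(1, (int)sqrt((double)n));
        blocks.assign((n + b - 1) / b, 0);
        for (int i = 0; i < n; i++) blocks[i / b] += a[i];
    }
    // set a[i] = x: only one block sum changes, O(1)
    void update(int i, long long x) {
        blocks[i / b] += x - a[i];
        a[i] = x;
    }
    // sum of a[l..r]: single elements at both ends, whole blocks in between
    long long query(int l, int r) {
        long long s = 0;
        while (l <= r && l % b != 0) s += a[l++];              // left part
        while (l + b - 1 <= r) { s += blocks[l / b]; l += b; } // full blocks
        while (l <= r) s += a[l++];                            // right part
        return s;
    }
};
```

For the 16-element array of the pictures, a query over the range
$[3,13]$ adds one single element on the left, two block sums in the
middle, and two single elements on the right.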

In practice, there is no need to use the
exact value of $\sqrt n$ as a parameter; instead,
it may be better to use parameters $k$ and $n/k$ where $k$ is
different from $\sqrt n$.
The optimal parameter depends on the problem and input.
For example, if an algorithm often goes
through the blocks but rarely inspects
single elements inside the blocks,
it may be a good idea to divide the array into
$k < \sqrt n$ blocks, each of which contains $n/k > \sqrt n$
elements.

\section{Combining algorithms}

In this section we discuss two square root algorithms
that are based on combining two algorithms into one algorithm.
In both cases, we could use either of the algorithms
alone and solve the problem in $O(n^2)$ time.
However, by combining the algorithms, the running
time becomes $O(n \sqrt n)$.

\subsubsection{Case processing}

Suppose that we are given a two-dimensional
grid that contains $n$ cells.
Each cell is assigned a letter,
and our task is to find two cells
with the same letter whose distance is minimum,
where the distance between cells
$(x_1,y_1)$ and $(x_2,y_2)$ is $|x_1-x_2|+|y_1-y_2|$.
For example, consider the following grid:
\begin{center}
\begin{tikzpicture}[scale=0.7]
\node at (0.5,0.5) {A};
\node at (0.5,1.5) {B};
\node at (0.5,2.5) {C};
\node at (0.5,3.5) {A};
\node at (1.5,0.5) {C};
\node at (1.5,1.5) {D};
\node at (1.5,2.5) {E};
\node at (1.5,3.5) {F};
\node at (2.5,0.5) {B};
\node at (2.5,1.5) {A};
\node at (2.5,2.5) {G};
\node at (2.5,3.5) {B};
\node at (3.5,0.5) {D};
\node at (3.5,1.5) {F};
\node at (3.5,2.5) {E};
\node at (3.5,3.5) {A};
\draw (0,0) grid (4,4);
\end{tikzpicture}
\end{center}

In this case, the minimum distance is 2 between the two 'E' letters.

Let us consider the problem of calculating the minimum distance
between two cells with a \emph{fixed} letter $c$.
There are two algorithms for this:

\emph{Algorithm 1:} Go through all pairs of cells with letter $c$,
and calculate the minimum distance between such cells.
This will take $O(k^2)$ time where $k$ is the number of cells with letter $c$.

\emph{Algorithm 2:} Perform a breadth-first search that simultaneously
starts at each cell with letter $c$. The minimum distance between
two cells with letter $c$ will be calculated in $O(n)$ time.

Now we can go through all letters that appear in the grid
and use either of the above algorithms.
If we always used Algorithm 1, the running time would be $O(n^2)$,
because all cells may have the same letter and $k=n$.
Also, if we always used Algorithm 2, the running time would be $O(n^2)$,
because all cells may have different letters and there would
be $n$ searches.

However, we can \emph{combine} the two algorithms and
use different algorithms for different letters
depending on how many times each letter appears in the grid.
Assume that a letter $c$ appears $k$ times.
If $k \le \sqrt n$, we use Algorithm 1, and if $k > \sqrt n$,
we use Algorithm 2.
It turns out that by doing this, the total running time
of the algorithm is only $O(n \sqrt n)$.

First, suppose that we use Algorithm 1 for a letter $c$.
Since $c$ appears at most $\sqrt n$ times in the grid,
we compare each cell with letter $c$ $O(\sqrt n)$ times
with other cells.
Thus, the time used for processing all such cells is $O(n \sqrt n)$.
Then, suppose that we use Algorithm 2 for a letter $c$.
There are at most $\sqrt n$ such letters,
so processing those letters also takes $O(n \sqrt n)$ time.
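
The combined method can be sketched as follows (the function name
\texttt{minDistance} is ours; in the BFS branch we also track which
start cell each visited cell came from, so that only meetings of two
\emph{different} start cells produce a candidate distance):

```cpp
#include <algorithm>
#include <cassert>
#include <cmath>
#include <cstdlib>
#include <map>
#include <queue>
#include <string>
#include <vector>
using namespace std;

// Minimum distance between two cells with the same letter:
// Algorithm 1 (all pairs) for letters with at most sqrt(n) occurrences,
// Algorithm 2 (multi-source BFS) for the others.
int minDistance(const vector<string>& grid) {
    int h = grid.size(), w = grid[0].size(), n = h * w;
    int limit = (int)sqrt((double)n);
    map<char, vector<pair<int,int>>> cells;
    for (int y = 0; y < h; y++)
        for (int x = 0; x < w; x++)
            cells[grid[y][x]].push_back({y, x});
    int best = 1000000000;
    int dy[] = {1, -1, 0, 0}, dx[] = {0, 0, 1, -1};
    for (auto& [c, v] : cells) {
        int k = v.size();
        if (k < 2) continue;
        if (k <= limit) {
            // Algorithm 1: compare all pairs, O(k^2)
            for (int i = 0; i < k; i++)
                for (int j = i + 1; j < k; j++)
                    best = min(best, abs(v[i].first - v[j].first) +
                                     abs(v[i].second - v[j].second));
        } else {
            // Algorithm 2: BFS started simultaneously at every cell with
            // letter c; when regions of two different start cells meet,
            // the path through the meeting edge is a candidate distance
            vector<vector<int>> d(h, vector<int>(w, -1));
            vector<vector<int>> src(h, vector<int>(w, 0));
            queue<pair<int,int>> q;
            for (int i = 0; i < k; i++) {
                d[v[i].first][v[i].second] = 0;
                src[v[i].first][v[i].second] = i;
                q.push(v[i]);
            }
            while (!q.empty()) {
                auto [y, x] = q.front(); q.pop();
                for (int t = 0; t < 4; t++) {
                    int ny = y + dy[t], nx = x + dx[t];
                    if (ny < 0 || ny >= h || nx < 0 || nx >= w) continue;
                    if (d[ny][nx] == -1) {
                        d[ny][nx] = d[y][x] + 1;
                        src[ny][nx] = src[y][x];
                        q.push({ny, nx});
                    } else if (src[ny][nx] != src[y][x]) {
                        best = min(best, d[y][x] + d[ny][nx] + 1);
                    }
                }
            }
        }
    }
    return best;
}
```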

\subsubsection{Batch processing}

Consider again a two-dimensional grid that contains $n$ cells.
Initially, each cell except one is white.
We perform $n-1$ operations, each of which is given a white cell.
Each operation first calculates the minimum distance
between the white cell and any black cell, and
then paints the white cell black.

For example, consider the following operation:
\begin{center}
\begin{tikzpicture}[scale=0.7]
\fill[color=black] (1,1) rectangle (2,2);
\fill[color=black] (3,1) rectangle (4,2);
\fill[color=black] (0,3) rectangle (1,4);
\node at (2.5,3.5) {*};
\draw (0,0) grid (4,4);
\end{tikzpicture}
\end{center}

There are three black cells and the cell marked with *
will be painted black next.
Before painting the cell, the minimum distance
to a black cell is calculated.
In this case, the minimum distance is 2,
to the black cell on its left.

There are two algorithms for solving the problem:

\emph{Algorithm 1:} After each operation, use breadth-first search
to calculate for each white cell the distance to the nearest black cell.
Each search takes $O(n)$ time, so the total running time is $O(n^2)$.

\emph{Algorithm 2:} Maintain a list of cells that have been
painted black, go through this list at each operation
and then add a new cell to the list.
The size of the list is $O(n)$, so the algorithm
takes $O(n^2)$ time.

We can combine the above algorithms by
dividing the operations into
$O(\sqrt n)$ \emph{batches}, each of which consists
of $O(\sqrt n)$ operations.
At the beginning of each batch,
we calculate for each white cell the minimum distance
to a black cell using breadth-first search.
Then, when processing a batch, we maintain a list of cells
that have been painted black in the current batch.
The list contains $O(\sqrt n)$ elements,
because there are $O(\sqrt n)$ operations in each batch.
Now, the distance between a white cell and the nearest black
cell is either the precalculated distance or the distance
to a cell that appears in the list.

The resulting algorithm works in
$O(n \sqrt n)$ time.
First, there are $O(\sqrt n)$ breadth-first searches
and each search takes $O(n)$ time.
Second, the total number of
distances calculated during the algorithm
is $O(n)$, and when calculating each distance,
we go through a list of $O(\sqrt n)$ cells.
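
The batch idea can be sketched as follows (a minimal illustration with
our own naming: \texttt{processOps} takes the initially black cell and
the list of operations as $(\text{row},\text{column})$ pairs, and returns
the distance answered by each operation):

```cpp
#include <algorithm>
#include <cassert>
#include <cmath>
#include <cstdlib>
#include <queue>
#include <vector>
using namespace std;

// Before each batch of about sqrt(q) operations, a BFS computes for every
// cell the distance to the nearest black cell; inside a batch, only the
// short list of recently painted cells is scanned in addition.
vector<int> processOps(int h, int w, pair<int,int> first,
                       const vector<pair<int,int>>& ops) {
    int q = ops.size();
    int b = max(1, (int)sqrt((double)q)); // batch size
    vector<pair<int,int>> black = {first};
    vector<vector<int>> dist;
    auto bfs = [&]() { // multi-source BFS from all black cells
        dist.assign(h, vector<int>(w, -1));
        queue<pair<int,int>> bq;
        for (auto [y, x] : black) { dist[y][x] = 0; bq.push({y, x}); }
        int dy[] = {1, -1, 0, 0}, dx[] = {0, 0, 1, -1};
        while (!bq.empty()) {
            auto [y, x] = bq.front(); bq.pop();
            for (int k = 0; k < 4; k++) {
                int ny = y + dy[k], nx = x + dx[k];
                if (ny >= 0 && ny < h && nx >= 0 && nx < w &&
                    dist[ny][nx] == -1) {
                    dist[ny][nx] = dist[y][x] + 1;
                    bq.push({ny, nx});
                }
            }
        }
    };
    vector<pair<int,int>> recent; // cells painted in the current batch
    vector<int> res;
    for (int i = 0; i < q; i++) {
        if (i % b == 0) { bfs(); recent.clear(); } // new batch begins
        auto [y, x] = ops[i];
        int d = dist[y][x]; // precalculated distance
        for (auto [py, px] : recent)
            d = min(d, abs(y - py) + abs(x - px));
        res.push_back(d);
        black.push_back({y, x});
        recent.push_back({y, x});
    }
    return res;
}
```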

\section{Integer partitions}

Some square root algorithms are based on
the following observation:
if a positive integer $n$ is represented as
a sum of positive integers,
such a sum contains only $O(\sqrt n)$ \emph{distinct} numbers.
The reason for this is that a sum with
the maximum number of distinct numbers has to be of the form
\[1+2+3+ \cdots = n.\]
The sum of the numbers $1,2,\ldots,k$ is
\[\frac{k(k+1)}{2},\]
so the maximum number of distinct numbers is $k = O(\sqrt n)$.
Next we will discuss two problems that can be solved
efficiently using this observation.

\subsubsection{Knapsack}

Suppose that we are given a list of integer weights
whose sum is $n$.
Our task is to find out all sums that can be formed using
a subset of the weights. For example, if the weights are
$\{1,3,3\}$, the possible sums are as follows:

\begin{itemize}[noitemsep]
\item $0$ (empty set)
\item $1$
\item $3$
\item $1+3=4$
\item $3+3=6$
\item $1+3+3=7$
\end{itemize}

Using the standard knapsack approach (see Chapter 7.4),
the problem can be solved as follows:
we define a function $f(k,s)$ whose value is 1
if the sum $s$ can be formed using the first $k$ weights,
and 0 otherwise.
All values of this function can be calculated
in $O(n^2)$ time using dynamic programming.

However, we can make the algorithm more efficient
by using the fact that the sum of the weights is $n$,
which means that there are at most $O(\sqrt n)$
distinct weights.
Thus, we can process the weights in groups
such that all weights in each group are equal.
It turns out that we can process each group
in $O(n)$ time, which yields an $O(n \sqrt n)$ time algorithm.

The idea is to use an array that records the sums of weights
that can be formed using the groups processed so far.
The array contains $n$ elements: element $k$ is 1 if the sum
$k$ can be formed and 0 otherwise.
To process a group of weights, we can easily scan the array
from left to right and record the new sums of weights that
can be formed using this group and the previous groups.
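
One way to scan a group in $O(n)$ time is to count, for each sum, how
many copies of the group's weight have been used to reach it (a sketch
with our own naming; \texttt{possibleSums} returns the reachability
array for sums $0,\ldots,n$):

```cpp
#include <cassert>
#include <map>
#include <vector>
using namespace std;

// Grouped knapsack: each distinct weight w with c copies is processed in
// O(n) time, for an O(n sqrt(n)) total.
vector<bool> possibleSums(const vector<int>& weights) {
    int n = 0;
    map<int,int> groups; // weight -> number of copies
    for (int w : weights) { n += w; groups[w]++; }
    vector<bool> can(n + 1, false);
    can[0] = true; // empty set
    for (auto [w, c] : groups) {
        // used[s]: copies of w needed to reach sum s from a sum that was
        // reachable before this group; -1 if s is unreachable
        vector<int> used(n + 1, -1);
        for (int s = 0; s <= n; s++) {
            if (can[s]) used[s] = 0;
            else if (s >= w && used[s - w] != -1 && used[s - w] < c)
                used[s] = used[s - w] + 1;
        }
        for (int s = 0; s <= n; s++) can[s] = (used[s] != -1);
    }
    return can;
}
```

For the weights $\{1,3,3\}$ of the example, exactly the sums
$0,1,3,4,6,7$ become reachable.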

\subsubsection{String construction}

Given a string and a dictionary of words,
consider the problem of counting the number of ways
the string can be constructed using the dictionary words.
For example,
if the string is \texttt{ABAB} and the dictionary is
$\{\texttt{A},\texttt{B},\texttt{AB}\}$,
there are 4 ways:
$\texttt{A}+\texttt{B}+\texttt{A}+\texttt{B}$,
$\texttt{AB}+\texttt{A}+\texttt{B}$,
$\texttt{A}+\texttt{B}+\texttt{AB}$ and
$\texttt{AB}+\texttt{AB}$.

Assume that the length of the string is $n$
and the total length of the dictionary words is $m$.
A natural way to solve the problem is to use dynamic
programming: we can define a function $f$ such that
$f(k)$ denotes the number of ways to construct a prefix
of length $k$ of the string using the dictionary words.
Using this function, $f(n)$ gives the answer to the problem.

There are several ways to calculate the values of $f$.
One method is to store the dictionary words
in a trie and go through all ways to select the
last word in each prefix, which results in an $O(n^2)$ time algorithm.
However, instead of using a trie, we can also use string hashing
and for each prefix go through the dictionary words and compare their
hash values.

The most straightforward implementation of this idea
yields an $O(nm)$ time algorithm,
because the dictionary may contain $m$ words.
However, we can make the algorithm more efficient
by considering the dictionary words grouped by their lengths.
Each group can be processed in constant time,
because all hash values of dictionary words may be stored in a set.
Since the total length of the words is $m$,
there are at most $O(\sqrt m)$ distinct word lengths
and at most $O(\sqrt m)$ groups.
Thus, the running time of the algorithm is only $O(n \sqrt m)$.
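
A sketch of this algorithm follows (the name \texttt{countWays} and the
hash parameters are our choices; like any single polynomial hash, the
sketch ignores the possibility of collisions):

```cpp
#include <cassert>
#include <map>
#include <set>
#include <string>
#include <vector>
using namespace std;

// Counts constructions of s from dict words, grouping dictionary hashes
// by word length: each candidate length costs one set lookup.
long long countWays(const string& s, const vector<string>& dict) {
    const long long MOD = 1e9 + 7, P = 911382323; // assumed hash parameters
    auto hashOf = [&](const string& t) {
        long long h = 0;
        for (char ch : t) h = (h * P + ch) % MOD;
        return h;
    };
    map<int, set<long long>> byLen; // word length -> set of hash values
    for (auto& w : dict) byLen[(int)w.size()].insert(hashOf(w));
    int n = s.size();
    // pre[k] = hash of prefix s[0..k-1], pw[k] = P^k (mod MOD)
    vector<long long> pre(n + 1, 0), pw(n + 1, 1);
    for (int i = 0; i < n; i++) {
        pre[i + 1] = (pre[i] * P + s[i]) % MOD;
        pw[i + 1] = pw[i] * P % MOD;
    }
    auto sub = [&](int a, int b) { // hash of s[a..b-1]
        return ((pre[b] - pre[a] * pw[b - a]) % MOD + MOD) % MOD;
    };
    vector<long long> f(n + 1, 0);
    f[0] = 1; // the empty prefix has one construction
    for (int k = 1; k <= n; k++)
        for (auto& [len, hashes] : byLen) {
            if (len > k) break; // map keys are in increasing order
            if (hashes.count(sub(k - len, k)))
                f[k] = (f[k] + f[k - len]) % MOD;
        }
    return f[n];
}
```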

\section{Mo's algorithm}

\index{Mo's algorithm}

\key{Mo's algorithm}\footnote{According to \cite{cod15}, this algorithm
is named after Mo Tao, a Chinese competitive programmer, but
the technique has appeared earlier in the literature \cite{ken06}.}
can be used in many problems
that require processing range queries in
a \emph{static} array.
Since the array is static, the queries can be
processed in any order.
Before processing the queries, the algorithm
sorts them in a special order which guarantees
that the algorithm works efficiently.

At each moment in the algorithm, there is an active
range, and the algorithm maintains the answer
to a query related to that range.
The algorithm processes the queries one by one,
and always moves the endpoints of the
active range by inserting and removing elements.
The time complexity of the algorithm is
$O(n \sqrt n f(n))$ when the array contains
$n$ elements, there are $n$ queries
and each insertion and removal of an element
takes $O(f(n))$ time.

The trick in Mo's algorithm is the order
in which the queries are processed:
the array is divided into blocks of $O(\sqrt n)$
elements, and the queries are sorted primarily by
the number of the block that contains the first element
in the range, and secondarily by the position of the
last element in the range.
It turns out that using this order, the algorithm
only performs $O(n \sqrt n)$ operations,
because the left endpoint moves
$O(\sqrt n)$ steps $n$ times,
and the right endpoint moves
$O(n)$ steps $\sqrt n$ times. Thus, both
endpoints move a total of $O(n \sqrt n)$ steps during the algorithm.

\subsubsection*{Example}

As an example, consider a problem
where we are given a set of queries,
each of them corresponding to a range in an array,
and our task is to calculate for each query
the number of \emph{distinct} elements in the range.

In Mo's algorithm, the queries are always sorted
in the same way, but it depends on the problem
how the answer to the query is maintained.
In this problem, we can maintain an array
\texttt{count} where $\texttt{count}[x]$
indicates the number of times an element $x$
occurs in the active range.

When we move from one query to another query,
the active range changes.
For example, if the current range is
\begin{center}
\begin{tikzpicture}[scale=0.7]
\fill[color=lightgray] (1,0) rectangle (5,1);
\draw (0,0) grid (9,1);
\node at (0.5, 0.5) {4};
\node at (1.5, 0.5) {2};
\node at (2.5, 0.5) {5};
\node at (3.5, 0.5) {4};
\node at (4.5, 0.5) {2};
\node at (5.5, 0.5) {4};
\node at (6.5, 0.5) {3};
\node at (7.5, 0.5) {3};
\node at (8.5, 0.5) {4};
\end{tikzpicture}
\end{center}
and the next range is
\begin{center}
\begin{tikzpicture}[scale=0.7]
\fill[color=lightgray] (2,0) rectangle (7,1);
\draw (0,0) grid (9,1);
\node at (0.5, 0.5) {4};
\node at (1.5, 0.5) {2};
\node at (2.5, 0.5) {5};
\node at (3.5, 0.5) {4};
\node at (4.5, 0.5) {2};
\node at (5.5, 0.5) {4};
\node at (6.5, 0.5) {3};
\node at (7.5, 0.5) {3};
\node at (8.5, 0.5) {4};
\end{tikzpicture}
\end{center}
there will be three steps:
the left endpoint moves one step to the right,
and the right endpoint moves two steps to the right.

After each step, the array \texttt{count}
needs to be updated.
After adding an element $x$,
we increase the value of
$\texttt{count}[x]$ by 1,
and if $\texttt{count}[x]=1$ after this,
we also increase the answer to the query by 1.
Similarly, after removing an element $x$,
we decrease the value of
$\texttt{count}[x]$ by 1,
and if $\texttt{count}[x]=0$ after this,
we also decrease the answer to the query by 1.

In this problem, the time needed to perform
each step is $O(1)$, so the total time complexity
of the algorithm is $O(n \sqrt n)$.
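
The whole algorithm for this problem can be sketched as follows
(\texttt{moDistinct} and the query format are our choices; array
elements are assumed to be small nonnegative integers so that
\texttt{count} can be a plain array):

```cpp
#include <algorithm>
#include <cassert>
#include <cmath>
#include <vector>
using namespace std;

// Mo's algorithm for the number of distinct elements in ranges.
// Queries are (l, r) pairs with inclusive 0-based endpoints.
vector<int> moDistinct(const vector<int>& a,
                       const vector<pair<int,int>>& queries) {
    int n = a.size();
    int b = max(1, (int)sqrt((double)n)); // block size
    vector<int> order(queries.size());
    for (int i = 0; i < (int)order.size(); i++) order[i] = i;
    // sort primarily by the block of the left endpoint,
    // secondarily by the right endpoint
    sort(order.begin(), order.end(), [&](int i, int j) {
        if (queries[i].first / b != queries[j].first / b)
            return queries[i].first / b < queries[j].first / b;
        return queries[i].second < queries[j].second;
    });
    vector<int> count(*max_element(a.begin(), a.end()) + 1, 0);
    vector<int> res(queries.size());
    int curL = 0, curR = -1, distinct = 0; // empty active range
    auto add = [&](int i) { if (++count[a[i]] == 1) distinct++; };
    auto rem = [&](int i) { if (--count[a[i]] == 0) distinct--; };
    for (int i : order) {
        auto [l, r] = queries[i];
        // expand first, then shrink, so counts never go negative
        while (curL > l) add(--curL);
        while (curR < r) add(++curR);
        while (curL < l) rem(curL++);
        while (curR > r) rem(curR--);
        res[i] = distinct;
    }
    return res;
}
```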