References
parent 797035fd8a
commit 07816edf67
@@ -0,0 +1,183 @@
\begin{thebibliography}{9}

\bibitem{aho83}
A. V. Aho, J. E. Hopcroft and J. D. Ullman.
\emph{Data Structures and Algorithms},
Addison-Wesley, 1983.

\bibitem{asp79}
B. Aspvall, M. F. Plass and R. E. Tarjan.
A linear-time algorithm for testing the truth of certain quantified boolean formulas.
\emph{Information Processing Letters}, 8(3):121--123, 1979.

\bibitem{bel58}
R. Bellman.
On a routing problem.
\emph{Quarterly of Applied Mathematics}, 16(1):87--90, 1958.

\bibitem{ben86}
J. Bentley.
\emph{Programming Pearls},
Addison-Wesley, 1986.

\bibitem{cod15}
Codeforces: On ``Mo's algorithm'',
\url{http://codeforces.com/blog/entry/20032}

\bibitem{dij59}
E. W. Dijkstra.
A note on two problems in connexion with graphs.
\emph{Numerische Mathematik}, 1(1):269--271, 1959.

\bibitem{edm65}
J. Edmonds.
Paths, trees, and flowers.
\emph{Canadian Journal of Mathematics}, 17(3):449--467, 1965.

\bibitem{edm72}
J. Edmonds and R. M. Karp.
Theoretical improvements in algorithmic efficiency for network flow problems.
\emph{Journal of the ACM}, 19(2):248--264, 1972.

\bibitem{fan94}
D. Fanding.
A faster algorithm for shortest-path -- SPFA.
\emph{Journal of Southwest Jiaotong University}, 2, 1994.

\bibitem{fen94}
P. M. Fenwick.
A new data structure for cumulative frequency tables.
\emph{Software: Practice and Experience}, 24(3):327--336, 1994.

\bibitem{fis11}
J. Fischer and V. Heun.
Space-efficient preprocessing schemes for range minimum queries on static arrays.
\emph{SIAM Journal on Computing}, 40(2):465--492, 2011.

\bibitem{flo62}
R. W. Floyd.
Algorithm 97: shortest path.
\emph{Communications of the ACM}, 5(6):345, 1962.

\bibitem{for56}
L. R. Ford and D. R. Fulkerson.
Maximal flow through a network.
\emph{Canadian Journal of Mathematics}, 8(3):399--404, 1956.

\bibitem{gal14}
F. Le Gall.
Powers of tensors and fast matrix multiplication.
In \emph{Proceedings of the 39th International Symposium on Symbolic and Algebraic Computation},
296--303, 2014.

\bibitem{gar79}
M. R. Garey and D. S. Johnson.
\emph{Computers and Intractability:
A Guide to the Theory of NP-Completeness},
W. H. Freeman and Company, 1979.

\bibitem{goo16}
Google Code Jam Statistics (2016),
\url{https://www.go-hero.net/jam/16}

\bibitem{gro14}
A. Grønlund and S. Pettie.
Threesomes, degenerates, and love triangles.
\emph{2014 IEEE 55th Annual Symposium on Foundations of Computer Science},
621--630, 2014.

\bibitem{gus97}
D. Gusfield.
\emph{Algorithms on Strings, Trees and Sequences:
Computer Science and Computational Biology},
Cambridge University Press, 1997.

\bibitem{huf52}
D. A. Huffman.
A method for the construction of minimum-redundancy codes.
\emph{Proceedings of the IRE}, 40(9):1098--1101, 1952.

\bibitem{kas61}
P. W. Kasteleyn.
The statistics of dimers on a lattice: I. The number of dimer arrangements on a quadratic lattice.
\emph{Physica}, 27(12):1209--1225, 1961.

\bibitem{ken06}
C. Kent, G. M. Landau and M. Ziv-Ukelson.
On the complexity of sparse exon assembly.
\emph{Journal of Computational Biology}, 13(5):1013--1027, 2006.

\bibitem{kru56}
J. B. Kruskal.
On the shortest spanning subtree of a graph and the traveling salesman problem.
\emph{Proceedings of the American Mathematical Society}, 7(1):48--50, 1956.

\bibitem{pac13}
J. Pachocki and J. Radoszewski.
Where to use and how not to use polynomial string hashing.
\emph{Olympiads in Informatics}, 2013.

\bibitem{pri57}
R. C. Prim.
Shortest connection networks and some generalizations.
\emph{Bell System Technical Journal}, 36(6):1389--1401, 1957.

\bibitem{sha81}
M. Sharir.
A strong-connectivity algorithm and its applications in data flow analysis.
\emph{Computers \& Mathematics with Applications}, 7(1):67--72, 1981.

\bibitem{str69}
V. Strassen.
Gaussian elimination is not optimal.
\emph{Numerische Mathematik}, 13(4):354--356, 1969.

\bibitem{tem61}
H. N. V. Temperley and M. E. Fisher.
Dimer problem in statistical mechanics -- an exact result.
\emph{Philosophical Magazine}, 6(68):1061--1063, 1961.

\end{thebibliography}

%
%
% \chapter*{Literature}
% \markboth{\MakeUppercase{Literature}}{}
% \addcontentsline{toc}{chapter}{Literature}
%
% \subsubsection{Textbooks on algorithms}
%
% \begin{itemize}
% \item T. H. Cormen, C. E. Leiserson,
% R. L. Rivest and C. Stein:
% \emph{Introduction to Algorithms},
% MIT Press, 2009 (3rd edition)
% \item J. Kleinberg and É. Tardos:
% \emph{Algorithm Design},
% Pearson, 2005
% \item S. S. Skiena:
% \emph{The Algorithm Design Manual},
% Springer, 2008 (2nd edition)
% \end{itemize}
%
% \subsubsection{Textbooks on competitive programming}
%
% \begin{itemize}
% \item K. Diks, T. Idziaszek,
% J. Łącki and J. Radoszewski:
% \emph{Looking for a Challenge?},
% University of Warsaw, 2012
% \item S. Halim and F. Halim:
% \emph{Competitive Programming},
% 2013 (3rd edition)
% \item S. S. Skiena and M. A. Revilla:
% \emph{Programming Challenges: The Programming
% Contest Training Manual},
% Springer, 2003
% \end{itemize}
%
% \subsubsection{Other books}
%
% \begin{itemize}
% \item J. Bentley:
% \emph{Programming Pearls},
% Addison-Wesley, 1999 (2nd edition)
% \end{itemize}
kkkk.tex
@@ -1,6 +1,6 @@
\documentclass[twoside,12pt,a4paper,english]{book}

-%\includeonly{johdanto}
+\includeonly{luku27,kirj}

\usepackage[english]{babel}
\usepackage[utf8]{inputenc}
@@ -105,9 +105,9 @@
\include{luku28}
\include{luku29}
\include{luku30}
-%\include{kirj}
+\include{kirj}

\cleardoublepage
\printindex

\end{document}
@@ -49,7 +49,7 @@ For example, in Google Code Jam 2016,
among the best 3,000 participants,
73 \% used C++,
15 \% used Python and
-10 \% used Java. %\footnote{\url{https://www.go-hero.net/jam/16}}
+10 \% used Java \cite{goo16}.
Some participants also used several languages.

Many people think that C++ is the best choice
@@ -281,7 +281,7 @@ Still, there are many important problems for which
no polynomial algorithm is known, i.e.,
nobody knows how to solve them efficiently.
\key{NP-hard} problems are an important set
-of problems for which no polynomial algorithm is known.
+of problems for which no polynomial algorithm is known \cite{gar79}.

\section{Estimating efficiency}

@@ -355,7 +355,8 @@ Given an array of $n$ integers $x_1,x_2,\ldots,x_n$,
our task is to find the
\key{maximum subarray sum}, i.e.,
the largest possible sum of numbers
-in a contiguous region in the array.
+in a contiguous region in the array\footnote{Jon Bentley's
+book \emph{Programming Pearls} \cite{ben86} made this problem popular.}.
The problem is interesting when there may be
negative numbers in the array.
For example, in the array
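For illustration, a minimal C++ sketch of the $O(n)$ approach (Kadane's algorithm); the array contents are only an example:

\begin{lstlisting}
#include <bits/stdc++.h>
using namespace std;

int main() {
    // example array; may contain negative numbers
    vector<long long> x = {-1, 2, 4, -3, 5, 2, -5, 2};
    long long best = x[0], sum = x[0];
    for (size_t k = 1; k < x.size(); k++) {
        // either extend the previous subarray or start a new one at k
        sum = max(x[k], sum + x[k]);
        best = max(best, sum);
    }
    cout << best << "\n"; // prints 10 for this example
    return 0;
}
\end{lstlisting}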
@@ -530,7 +530,7 @@ the string \texttt{AB} or the string \texttt{C}.

\subsubsection{Huffman coding}

-\key{Huffman coding} is a greedy algorithm
+\key{Huffman coding} \cite{huf52} is a greedy algorithm
that constructs an optimal code for
compressing a given string.
The algorithm builds a binary tree
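A rough C++ sketch of the greedy step: repeatedly combine the two least frequent symbols. It only computes the length of the optimal encoding, not the codewords, and the frequencies are invented for the example:

\begin{lstlisting}
#include <bits/stdc++.h>
using namespace std;

int main() {
    // made-up character frequencies of the string to be compressed
    vector<long long> freq = {5, 1, 6, 3}; // e.g. A:5, B:1, C:6, D:3
    priority_queue<long long, vector<long long>,
                   greater<long long>> q(freq.begin(), freq.end());
    long long bits = 0;
    while (q.size() > 1) {
        // greedily combine the two smallest weights into one tree node
        long long a = q.top(); q.pop();
        long long b = q.top(); q.pop();
        bits += a + b;   // each merge adds one bit to every symbol below it
        q.push(a + b);
    }
    cout << bits << "\n"; // 28 bits in the optimal encoding of this example
    return 0;
}
\end{lstlisting}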
@@ -983,7 +983,7 @@ $2^m$ distinct rows and the time complexity is
$O(n 2^{2m})$.

As a final note, there is also a surprising direct formula
-for calculating the number of tilings:
+for calculating the number of tilings \cite{kas61,tem61}:
\[ \prod_{a=1}^{\lceil n/2 \rceil} \prod_{b=1}^{\lceil m/2 \rceil} 4 \cdot (\cos^2 \frac{\pi a}{n + 1} + \cos^2 \frac{\pi b}{m+1})\]
This formula is very efficient, because it calculates
the number of tilings in $O(nm)$ time,
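A small C++ sketch that evaluates the product with floating point and rounds the result; the grid size is an arbitrary example:

\begin{lstlisting}
#include <bits/stdc++.h>
using namespace std;

int main() {
    int n = 4, m = 7; // example grid size
    const double PI = acos(-1.0);
    double product = 1;
    for (int a = 1; a <= (n + 1) / 2; a++) {       // a = 1..ceil(n/2)
        for (int b = 1; b <= (m + 1) / 2; b++) {   // b = 1..ceil(m/2)
            double ca = cos(PI * a / (n + 1));
            double cb = cos(PI * b / (m + 1));
            product *= 4 * (ca * ca + cb * cb);
        }
    }
    // the product is an integer in exact arithmetic; round away the error
    cout << (long long)round(product) << "\n"; // 781 tilings of a 4x7 grid
    return 0;
}
\end{lstlisting}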
@@ -419,7 +419,13 @@ A more difficult problem is
the \key{3SUM problem} that asks to
find \emph{three} numbers in the array
such that their sum is $x$.
-This problem can be solved in $O(n^2)$ time.
+Using the idea of the above algorithm,
+this problem can be solved in $O(n^2)$ time\footnote{For a long time,
+it was thought that solving
+the 3SUM problem more efficiently than in $O(n^2)$ time
+would not be possible.
+However, in 2014, it turned out \cite{gro14}
+that this is not the case.}.
Can you see how?

\section{Nearest smaller elements}
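One possible $O(n^2)$ approach: sort the array and, for each choice of the first number, scan the remaining part with two pointers. A C++ sketch with invented values:

\begin{lstlisting}
#include <bits/stdc++.h>
using namespace std;

int main() {
    vector<long long> t = {2, 7, 5, 1, 3, 9};   // example array
    long long x = 19;                           // example target sum
    sort(t.begin(), t.end());
    int n = t.size();
    for (int i = 0; i < n; i++) {
        // look for two further values with sum x - t[i]
        int a = i + 1, b = n - 1;
        while (a < b) {
            long long s = t[i] + t[a] + t[b];
            if (s == x) {
                cout << t[i] << " " << t[a] << " " << t[b] << "\n"; // 3 7 9
                return 0;
            }
            if (s < x) a++; else b--;
        }
    }
    cout << "no solution\n";
    return 0;
}
\end{lstlisting}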
luku09.tex
@@ -242,9 +242,12 @@ to the position of $X$.

\subsubsection{Minimum queries}

-It is also possible to process minimum queries
-in $O(1)$ time, though it is more difficult than
-to process sum queries.
+Next we will see how we can
+process range minimum queries in $O(1)$ time
+after an $O(n \log n)$ time preprocessing\footnote{There are also
+sophisticated techniques \cite{fis11} where the preprocessing time
+is only $O(n)$, but such algorithms are not needed in
+competitive programming.}.
Note that minimum and maximum queries can always
be processed using similar techniques,
so it suffices to focus on minimum queries.
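A minimal C++ sketch of the sparse table approach: $O(n \log n)$ preprocessing and $O(1)$ queries; the array is an example:

\begin{lstlisting}
#include <bits/stdc++.h>
using namespace std;

int main() {
    vector<int> a = {1, 3, 4, 8, 6, 1, 4, 2};
    int n = a.size(), LOG = 1;
    while ((1 << LOG) < n) LOG++;
    // sp[j][i] = minimum of the range that starts at i and has length 2^j
    vector<vector<int>> sp(LOG + 1, vector<int>(n));
    sp[0] = a;
    for (int j = 1; (1 << j) <= n; j++)
        for (int i = 0; i + (1 << j) <= n; i++)
            sp[j][i] = min(sp[j - 1][i], sp[j - 1][i + (1 << (j - 1))]);

    // query: minimum of a[l..r] in O(1) from two overlapping power-of-two ranges
    auto rmq = [&](int l, int r) {
        int k = 31 - __builtin_clz(r - l + 1); // largest k with 2^k <= length
        return min(sp[k][l], sp[k][r - (1 << k) + 1]);
    };
    cout << rmq(1, 6) << "\n"; // minimum of a[1..6] is 1
    return 0;
}
\end{lstlisting}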
@@ -437,7 +440,7 @@ we can conclude that $\textrm{rmq}(2,7)=1$.
\index{binary indexed tree}
\index{Fenwick tree}

-A \key{binary indexed tree} or \key{Fenwick tree}
+A \key{binary indexed tree} or \key{Fenwick tree} \cite{fen94}
can be seen as a dynamic variant of a sum array.
This data structure supports two $O(\log n)$ time operations:
calculating the sum of elements in a range
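A compact C++ sketch of the two operations (1-indexed array; the update values are only an example):

\begin{lstlisting}
#include <bits/stdc++.h>
using namespace std;

int n = 8;
vector<long long> bit(n + 1, 0); // 1-indexed Fenwick tree

// add x to position k
void add(int k, long long x) {
    for (; k <= n; k += k & -k) bit[k] += x;
}

// prefix sum of positions 1..k
long long sum(int k) {
    long long s = 0;
    for (; k >= 1; k -= k & -k) s += bit[k];
    return s;
}

int main() {
    add(3, 5);                        // a[3] += 5
    add(7, 2);                        // a[7] += 2
    cout << sum(7) - sum(2) << "\n";  // sum of range [3,7] = 7
    return 0;
}
\end{lstlisting}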
@@ -546,4 +546,4 @@ it is difficult to find out if the nodes
in a graph can be colored using $k$ colors
so that no adjacent nodes have the same color.
Even when $k=3$, no efficient algorithm is known
-but the problem is NP-hard.
+but the problem is NP-hard \cite{gar79}.
@@ -24,7 +24,7 @@ for finding shortest paths.

\index{Bellman–Ford algorithm}

-The \key{Bellman–Ford algorithm} finds the
+The \key{Bellman–Ford algorithm} \cite{bel58} finds the
shortest paths from a starting node to all
other nodes in the graph.
The algorithm can process all kinds of graphs,
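A short C++ sketch of the idea on an edge list; the example graph is invented and INF marks unreachable nodes:

\begin{lstlisting}
#include <bits/stdc++.h>
using namespace std;

int main() {
    int n = 5;                                   // nodes 1..n
    // example edge list: (a, b, w) means an edge a -> b with weight w
    vector<tuple<int,int,int>> edges = {
        {1,2,5}, {1,3,3}, {3,4,1}, {4,2,-2}, {2,5,2}
    };
    const long long INF = 1e18;
    vector<long long> dist(n + 1, INF);
    dist[1] = 0;                                 // start from node 1
    // n-1 rounds: relax every edge on each round
    for (int round = 1; round <= n - 1; round++) {
        for (auto [a, b, w] : edges) {
            if (dist[a] < INF)
                dist[b] = min(dist[b], dist[a] + w);
        }
    }
    for (int i = 1; i <= n; i++) cout << dist[i] << " "; // 0 2 3 4 4
    cout << "\n";
    return 0;
}
\end{lstlisting}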
@@ -280,7 +280,7 @@ regardless of the starting node.

\index{SPFA algorithm}

-The \key{SPFA algorithm} (``Shortest Path Faster Algorithm'')
+The \key{SPFA algorithm} (``Shortest Path Faster Algorithm'') \cite{fan94}
is a variant of the Bellman–Ford algorithm
that is often more efficient than the original algorithm.
The SPFA algorithm does not go through all the edges on each round,
@@ -331,7 +331,7 @@ original Bellman–Ford algorithm.

\index{Dijkstra's algorithm}

-\key{Dijkstra's algorithm} finds the shortest
+\key{Dijkstra's algorithm} \cite{dij59} finds the shortest
paths from the starting node to all other nodes,
like the Bellman–Ford algorithm.
The benefit in Dijkstra's algorithm is that
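A C++ sketch using a priority queue over an adjacency list; the example graph is invented and all edge weights are non-negative:

\begin{lstlisting}
#include <bits/stdc++.h>
using namespace std;

int main() {
    int n = 4;
    // adj[a] = list of (b, w): edge a -> b with weight w (example graph)
    vector<vector<pair<int,int>>> adj(n + 1);
    auto addEdge = [&](int a, int b, int w) {
        adj[a].push_back({b, w});
        adj[b].push_back({a, w});
    };
    addEdge(1, 2, 6); addEdge(1, 3, 2); addEdge(3, 2, 3); addEdge(2, 4, 1);

    const long long INF = 1e18;
    vector<long long> dist(n + 1, INF);
    vector<bool> processed(n + 1, false);
    priority_queue<pair<long long,int>> q;   // (-distance, node)
    dist[1] = 0;
    q.push({0, 1});
    while (!q.empty()) {
        int a = q.top().second; q.pop();
        if (processed[a]) continue;          // skip outdated entries
        processed[a] = true;
        for (auto [b, w] : adj[a]) {
            if (dist[a] + w < dist[b]) {
                dist[b] = dist[a] + w;
                q.push({-dist[b], b});       // negate to get a min-queue
            }
        }
    }
    for (int i = 1; i <= n; i++) cout << dist[i] << " "; // 0 5 2 6
    cout << "\n";
    return 0;
}
\end{lstlisting}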
@@ -594,7 +594,7 @@ at most one distance to the priority queue.

\index{Floyd–Warshall algorithm}

-The \key{Floyd–Warshall algorithm}
+The \key{Floyd–Warshall algorithm} \cite{flo62}
is an alternative way to approach the problem
of finding shortest paths.
Unlike the other algorithms in this chapter,
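A minimal C++ sketch of the triple loop over a distance matrix; the matrix below is a small invented example and INF means that there is no edge:

\begin{lstlisting}
#include <bits/stdc++.h>
using namespace std;

int main() {
    const long long INF = 1e9;      // large enough, but INF+INF does not overflow
    int n = 4;
    // dist[a][b] = weight of edge a->b, 0 on the diagonal, INF if no edge
    vector<vector<long long>> dist = {
        {0,   5,   INF, 9},
        {5,   0,   2,   INF},
        {INF, 2,   0,   7},
        {9,   INF, 7,   0}
    };
    // try to improve every pair (a,b) through every intermediate node k
    for (int k = 0; k < n; k++)
        for (int a = 0; a < n; a++)
            for (int b = 0; b < n; b++)
                dist[a][b] = min(dist[a][b], dist[a][k] + dist[k][b]);

    cout << dist[0][2] << "\n"; // shortest path 0 -> 2 has length 7
    return 0;
}
\end{lstlisting}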
@@ -123,7 +123,7 @@ maximum spanning trees by processing the edges in reverse order.

\index{Kruskal's algorithm}

-In \key{Kruskal's algorithm}, the initial spanning tree
+In \key{Kruskal's algorithm} \cite{kru56}, the initial spanning tree
only contains the nodes of the graph
and does not contain any edges.
Then the algorithm goes through the edges
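A C++ sketch of this idea with a simple union-find structure; the edge list is invented:

\begin{lstlisting}
#include <bits/stdc++.h>
using namespace std;

vector<int> par, sz;

int find(int x) {
    while (x != par[x]) x = par[x];
    return x;
}

bool unite(int a, int b) {
    a = find(a); b = find(b);
    if (a == b) return false;            // already in the same component
    if (sz[a] < sz[b]) swap(a, b);
    sz[a] += sz[b];
    par[b] = a;
    return true;
}

int main() {
    int n = 5;
    par.resize(n + 1); sz.assign(n + 1, 1);
    for (int i = 1; i <= n; i++) par[i] = i;

    // example edge list: (weight, a, b)
    vector<tuple<int,int,int>> edges = {
        {3,1,2}, {5,1,3}, {2,2,3}, {6,3,4}, {4,4,5}, {7,2,5}
    };
    sort(edges.begin(), edges.end());    // process edges by increasing weight

    long long total = 0;
    for (auto [w, a, b] : edges)
        if (unite(a, b)) total += w;     // add edge if it joins two components
    cout << total << "\n";               // weight of the minimum spanning tree: 15
    return 0;
}
\end{lstlisting}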
@@ -567,7 +567,7 @@ the smaller set to the larger set.

\index{Prim's algorithm}

-\key{Prim's algorithm} is an alternative method
+\key{Prim's algorithm} \cite{pri57} is an alternative method
for finding a minimum spanning tree.
The algorithm first adds an arbitrary node
to the tree.
@@ -135,7 +135,10 @@ presented in Chapter 16.

\index{Kosaraju's algorithm}

-\key{Kosaraju's algorithm} is an efficient
+\key{Kosaraju's algorithm}\footnote{According to \cite{aho83},
+S. R. Kosaraju invented this algorithm in 1978
+but did not publish it. In 1981, the same algorithm was rediscovered
+and published by M. Sharir \cite{sha81}.} is an efficient
method for finding the strongly connected components
of a directed graph.
The algorithm performs two depth-first searches:
@@ -365,7 +368,7 @@ performs two depth-first searches.
\index{2SAT problem}

Strong connectivity is also linked with the
-\key{2SAT problem}.
+\key{2SAT problem} \cite{asp79}.
In this problem, we are given a logical formula
\[
(a_1 \lor b_1) \land (a_2 \lor b_2) \land \cdots \land (a_m \lor b_m),
@@ -556,7 +559,7 @@ and both $x_i$ and $x_j$ become false.
A more difficult problem is the \key{3SAT problem}
where each part of the formula is of the form
$(a_i \lor b_i \lor c_i)$.
-This problem is NP-hard, so no efficient algorithm
+This problem is NP-hard \cite{gar79}, so no efficient algorithm
for solving the problem is known.

luku20.tex
@@ -89,7 +89,7 @@ It is easy to see that this flow is maximum,
because the total capacity of the edges
leading to the sink is $7$.

-\subsubsection{Minimum cuts}
+\subsubsection{Minimum cut}

\index{cut}
\index{minimum cut}
@@ -157,7 +157,7 @@ The algorithm also helps us to understand

\index{Ford–Fulkerson algorithm}

-The \key{Ford–Fulkerson algorithm} finds
+The \key{Ford–Fulkerson algorithm} \cite{for56} finds
the maximum flow in a graph.
The algorithm begins with an empty flow,
and at each step finds a path in the graph
@@ -430,7 +430,7 @@ by using one of the following techniques:

\index{Edmonds–Karp algorithm}

-The \key{Edmonds–Karp algorithm}
+The \key{Edmonds–Karp algorithm} \cite{edm72}
is a variant of the
Ford–Fulkerson algorithm that
chooses each path so that the number of edges
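A C++ sketch of the Ford–Fulkerson/Edmonds–Karp idea: repeatedly find an augmenting path with breadth-first search and saturate it. The residual capacities are kept in a matrix, and the example network is invented:

\begin{lstlisting}
#include <bits/stdc++.h>
using namespace std;

int main() {
    int n = 6, source = 1, sink = 6;
    // cap[a][b] = remaining capacity of edge a -> b (example network)
    vector<vector<long long>> cap(n + 1, vector<long long>(n + 1, 0));
    auto addEdge = [&](int a, int b, long long c) { cap[a][b] += c; };
    addEdge(1,2,5); addEdge(1,4,4); addEdge(2,3,6);
    addEdge(4,3,3); addEdge(3,6,5); addEdge(4,5,1); addEdge(5,6,2);

    long long flow = 0;
    while (true) {
        // BFS for a shortest augmenting path in the residual graph
        vector<int> prev(n + 1, 0);
        prev[source] = source;
        queue<int> q;
        q.push(source);
        while (!q.empty() && !prev[sink]) {
            int a = q.front(); q.pop();
            for (int b = 1; b <= n; b++)
                if (!prev[b] && cap[a][b] > 0) { prev[b] = a; q.push(b); }
        }
        if (!prev[sink]) break;              // no augmenting path: flow is maximum
        // find the bottleneck capacity on the path
        long long add = LLONG_MAX;
        for (int b = sink; b != source; b = prev[b])
            add = min(add, cap[prev[b]][b]);
        // update residual capacities along the path
        for (int b = sink; b != source; b = prev[b]) {
            cap[prev[b]][b] -= add;
            cap[b][prev[b]] += add;
        }
        flow += add;
    }
    cout << flow << "\n"; // maximum flow of this example network: 6
    return 0;
}
\end{lstlisting}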
@@ -459,7 +459,7 @@ because depth-first search can be used for finding paths.
Both algorithms are efficient enough for problems
that typically appear in programming contests.

-\subsubsection{Minimum cut}
+\subsubsection{Minimum cuts}

\index{minimum cut}

@@ -779,7 +779,7 @@ such that each pair is connected with an edge and
each node belongs to at most one pair.

There are polynomial algorithms for finding
-maximum matchings in general graphs,
+maximum matchings in general graphs \cite{edm65},
but such algorithms are complex and do
not appear in programming contests.
However, in bipartite graphs,
luku23.tex
@@ -244,12 +244,16 @@ we can calculate the product of
two $n \times n$ matrices
in $O(n^3)$ time.
There are also more efficient algorithms
-for matrix multiplication:
-at the moment, the best known time complexity
-is $O(n^{2.37})$.
-However, such special algorithms are not needed
+for matrix multiplication\footnote{The first such
+algorithm, with time complexity $O(n^{2.80735})$,
+was published in 1969 \cite{str69}, and
+the best current algorithm
+works in $O(n^{2.37286})$ time \cite{gal14}.},
+but they are mostly of theoretical interest
+and such special algorithms are not needed
in competitive programming.

\subsubsection{Matrix power}

\index{matrix power}
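A minimal C++ sketch of the basic $O(n^3)$ multiplication; the matrices are a small invented example:

\begin{lstlisting}
#include <bits/stdc++.h>
using namespace std;

int main() {
    // multiply two n x n matrices in O(n^3) time (example matrices)
    int n = 2;
    vector<vector<long long>> A = {{1, 2}, {3, 4}};
    vector<vector<long long>> B = {{5, 6}, {7, 8}};
    vector<vector<long long>> C(n, vector<long long>(n, 0));
    for (int i = 0; i < n; i++)
        for (int j = 0; j < n; j++)
            for (int k = 0; k < n; k++)
                C[i][j] += A[i][k] * B[k][j];
    // C = {{19, 22}, {43, 50}}
    cout << C[0][0] << " " << C[0][1] << "\n"
         << C[1][0] << " " << C[1][1] << "\n";
    return 0;
}
\end{lstlisting}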
@@ -416,12 +416,7 @@ which is convenient, because operations with 32 and 64
bit integers are calculated modulo $2^{32}$ and $2^{64}$.
However, this is not a good choice, because it is possible
to construct inputs that always generate collisions when
-constants of the form $2^x$ are used.
-% \footnote{
-% J. Pachocki and Jakub Radoszweski:
-% ''Where to use and how not to use polynomial string hashing''.
-% \textit{Olympiads in Informatics}, 2013.
-% }.
+constants of the form $2^x$ are used \cite{pac13}.

\section{Z-algorithm}

@@ -433,7 +428,7 @@ gives for each position $k$ in the string
the length of the longest substring
that begins at position $k$ and is a prefix of the string.
Such an array can be efficiently constructed
-using the \key{Z-algorithm}.
+using the \key{Z-algorithm} \cite{gus97}.

For example, the Z-array for the string
\texttt{ACBACDACBACBACDA} is as follows:
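A C++ sketch of one standard way to build the Z-array in linear time; the string is the example from the text:

\begin{lstlisting}
#include <bits/stdc++.h>
using namespace std;

int main() {
    string s = "ACBACDACBACBACDA"; // the example string from the text
    int n = s.size();
    vector<int> z(n, 0);
    int x = 0, y = 0;              // [x,y] = rightmost segment that matches a prefix
    for (int k = 1; k < n; k++) {
        // reuse previously computed values inside the segment
        z[k] = max(0, min(z[k - x], y - k + 1));
        // extend the match by comparing characters directly
        while (k + z[k] < n && s[z[k]] == s[k + z[k]]) {
            x = k; y = k + z[k];
            z[k]++;
        }
    }
    for (int k = 0; k < n; k++) cout << z[k] << " ";
    cout << "\n"; // e.g. z[6] = 5, because ACBAC occurs at position 6
    return 0;
}
\end{lstlisting}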
@@ -314,7 +314,9 @@ because both cases take a total of $O(n \sqrt n)$ time.

\index{Mo's algorithm}

-\key{Mo's algorithm} can be used in many problems
+\key{Mo's algorithm}\footnote{According to \cite{cod15}, this algorithm
+is named after Mo Tao, a Chinese competitive programmer. However,
+the technique had appeared earlier in the literature \cite{ken06}.} can be used in many problems
that require processing range queries in
a \emph{static} array.
Before processing the queries, the algorithm
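A C++ sketch of the idea for one concrete task, counting distinct values in query ranges; the array, the queries and the helper names are invented for the example:

\begin{lstlisting}
#include <bits/stdc++.h>
using namespace std;

int main() {
    // example: count distinct values in each query range [l,r] of a static array
    vector<int> a = {1, 2, 1, 3, 2, 2, 1, 4};
    vector<pair<int,int>> queries = {{0, 4}, {2, 7}, {1, 3}};
    int n = a.size(), blockSize = max(1, (int)sqrt(n));

    // sort queries by block of l, then by r (the core idea of Mo's algorithm)
    vector<int> order(queries.size());
    iota(order.begin(), order.end(), 0);
    sort(order.begin(), order.end(), [&](int i, int j) {
        auto ki = make_pair(queries[i].first / blockSize, queries[i].second);
        auto kj = make_pair(queries[j].first / blockSize, queries[j].second);
        return ki < kj;
    });

    vector<int> cnt(*max_element(a.begin(), a.end()) + 1, 0);
    vector<int> answer(queries.size());
    int distinct = 0, curL = 0, curR = -1;   // current window is a[curL..curR]
    auto add = [&](int i) { if (cnt[a[i]]++ == 0) distinct++; };
    auto del = [&](int i) { if (--cnt[a[i]] == 0) distinct--; };

    for (int idx : order) {
        auto [l, r] = queries[idx];
        while (curL > l) add(--curL);
        while (curR < r) add(++curR);
        while (curL < l) del(curL++);
        while (curR > r) del(curR--);
        answer[idx] = distinct;
    }
    for (int x : answer) cout << x << " "; // 3 4 3
    cout << "\n";
    return 0;
}
\end{lstlisting}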