Improve grammar and language style in chapter 2

Roope Salmi 2017-02-23 03:07:40 +02:00
parent 1bf55961f6
commit dc4e33512a
1 changed file with 20 additions and 20 deletions


@@ -16,7 +16,7 @@ for some input.
 The idea is to represent the efficiency
 as an function whose parameter is the size of the input.
 By calculating the time complexity,
-we can find out whether the algorithm is good enough
+we can find out whether the algorithm is fast enough
 without implementing it.
 \section{Calculation rules}
@@ -97,7 +97,7 @@ for (int i = 1; i <= n; i++) {
 \subsubsection*{Phases}
-If the code consists of consecutive phases,
+If the algorithm consists of consecutive phases,
 the total time complexity is the largest
 time complexity of a single phase.
 The reason for this is that the slowest
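The rule changed in this hunk can be illustrated with a short sketch (illustrative code, not from the book; the function name is made up):

```cpp
// Two consecutive phases: the first takes O(n) steps, the second O(n^2).
// The total complexity is O(n^2), because the slowest phase dominates.
long long countPhaseOps(int n) {
    long long ops = 0;
    for (int i = 1; i <= n; i++) ops++;          // phase 1: O(n)
    for (int i = 1; i <= n; i++)
        for (int j = 1; j <= n; j++) ops++;      // phase 2: O(n^2)
    return ops;                                  // n + n^2 operations in total
}
```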
@@ -211,15 +211,15 @@ $n$ must be divided by 2 to get 1.
 A \key{square root algorithm} is slower than
 $O(\log n)$ but faster than $O(n)$.
 A special property of square roots is that
-$\sqrt n = n/\sqrt n$, so the square root $\sqrt n$ lies
-in some sense in the middle of the input.
+$\sqrt n = n/\sqrt n$, so the square root $\sqrt n$ lies,
+in some sense, in the middle of the input.
 \item[$O(n)$]
 \index{linear algorithm}
 A \key{linear} algorithm goes through the input
 a constant number of times.
 This is often the best possible time complexity,
-because it is usually needed to access each
+because it is usually necessary to access each
 input element at least once before
 reporting the answer.
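As a minimal illustration of a linear algorithm (a sketch, not code from the book), a single pass that sums an array accesses each element exactly once:

```cpp
#include <vector>

// One pass over the input: each element is accessed exactly once,
// so the running time is O(n).
long long arraySum(const std::vector<int>& arr) {
    long long s = 0;
    for (int x : arr) s += x;
    return s;
}
```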
@@ -281,13 +281,13 @@ Still, there are many important problems for which
 no polynomial algorithm is known, i.e.,
 nobody knows how to solve them efficiently.
 \key{NP-hard} problems are an important set
-of problems for which no polynomial algorithm is known \cite{gar79}.
+of problems, for which no polynomial algorithm is known \cite{gar79}.
 \section{Estimating efficiency}
 By calculating the time complexity of an algorithm,
-it is possible to check before
-implementing the algorithm that it is
+it is possible to check, before
+implementing the algorithm, that it is
 efficient enough for the problem.
 The starting point for estimations is the fact that
 a modern computer can perform some hundreds of
@@ -305,7 +305,7 @@ we can try to guess
 the required time complexity of the algorithm
 that solves the problem.
 The following table contains some useful estimates
-assuming that the time limit is one second.
+assuming a time limit of one second.
 \begin{center}
 \begin{tabular}{ll}
@@ -322,7 +322,7 @@ $n \le 10$ & $O(n!)$ \\
 \end{center}
 For example, if the input size is $n=10^5$,
-it is probably expected that the time
+it should probably be expected that the time
 complexity of the algorithm is $O(n)$ or $O(n \log n)$.
 This information makes it easier to design the algorithm,
 because it rules out approaches that would yield
@@ -347,7 +347,7 @@ for solving a problem such that their
 time complexities are different.
 This section discusses a classic problem that
 has a straightforward $O(n^3)$ solution.
-However, by designing a better algorithm it
+However, by designing a better algorithm, it
 is possible to solve the problem in $O(n^2)$
 time and even in $O(n)$ time.
@@ -415,10 +415,10 @@ the following subarray produces the maximum sum $10$:
 \subsubsection{Algorithm 1}
-A straightforward algorithm to the problem
-is to go through all possible ways to
-select a subarray, calculate the sum of
-numbers in each subarray and maintain
+A straightforward algorithm to solve the problem
+is to go through all possible ways of
+selecting a subarray, calculate the sum of
+the numbers in each subarray and maintain
 the maximum sum.
 The following code implements this algorithm:
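The code itself lies outside this hunk; as a hedged sketch, the brute-force algorithm described above (function name and zero-based indexing are my own, not from the book) could look like:

```cpp
#include <vector>
#include <algorithm>

// Algorithm 1: try every subarray [a..b], sum its elements directly,
// and keep the best sum seen. Three nested loops give O(n^3) time.
int maxSubarrayBrute(const std::vector<int>& arr) {
    int n = arr.size();
    int best = 0;  // the empty subarray has sum 0
    for (int a = 0; a < n; a++) {
        for (int b = a; b < n; b++) {
            int sum = 0;
            for (int k = a; k <= b; k++) sum += arr[k];
            best = std::max(best, sum);
        }
    }
    return best;
}
```

On the book's example array $[-1,2,4,-3,5,2,-5,2]$ this yields the maximum sum $10$.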
@@ -473,12 +473,12 @@ After this change, the time complexity is $O(n^2)$.
 Surprisingly, it is possible to solve the problem
 in $O(n)$ time, which means that we can remove
 one more loop.
-The idea is to calculate for each array position
+The idea is to calculate, for each array position,
 the maximum sum of a subarray that ends at that position.
 After this, the answer for the problem is the
 maximum of those sums.
-Condider the subproblem of finding the maximum-sum subarray
+Consider the subproblem of finding the maximum-sum subarray
 that ends at position $k$.
 There are two possibilities:
 \begin{enumerate}
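The $O(n)$ idea described in this hunk, where the maximum subarray sum ending at each position either extends the previous subarray or starts a new one, can be sketched as follows (illustrative code; the function name is made up):

```cpp
#include <vector>
#include <algorithm>

// Algorithm 3: for each position, 'sum' holds the maximum sum of a
// subarray ending there: either extend the previous subarray or start
// a new one at the current element. A single pass gives O(n) time.
int maxSubarrayLinear(const std::vector<int>& arr) {
    int best = 0, sum = 0;  // best = 0 allows the empty subarray
    for (int x : arr) {
        sum = std::max(x, sum + x);
        best = std::max(best, sum);
    }
    return best;
}
```

On the book's example array $[-1,2,4,-3,5,2,-5,2]$ this also yields $10$, matching the slower algorithms.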
@@ -517,7 +517,7 @@ It is interesting to study how efficient
 algorithms are in practice.
 The following table shows the running times
 of the above algorithms for different
-values of $n$ in a modern computer.
+values of $n$ on a modern computer.
 In each test, the input was generated randomly.
 The time needed for reading the input was not
@@ -539,9 +539,9 @@ $10^7$ & > $10{,}0$ s & > $10{,}0$ s & $0{,}0$ s \\
 The comparison shows that all algorithms
 are efficient when the input size is small,
 but larger inputs bring out remarkable
-differences in running times of the algorithms.
+differences in the running times of the algorithms.
 The $O(n^3)$ time algorithm 1 becomes slow
 when $n=10^4$, and the $O(n^2)$ time algorithm 2
 becomes slow when $n=10^5$.
-Only the $O(n)$ time algorithm 3 processes
+Only the $O(n)$ time algorithm 3 is able to process
 even the largest inputs instantly.