PurpleThinker's blog

By PurpleThinker, 21 month(s) ago, In English

At my university, we are taking the algorithms course this semester. The course is easy for me, since I already know most of the syllabus from my high-school CP background. However, yesterday I learnt something quite surprising in this course. We discussed time complexities: big $$$O$$$, $$$\Omega$$$, and $$$\Theta$$$. Take a look at this (mostly) formal definition of $$$O$$$:

Spoiler

For functions $$$f, g : \mathbb{N} \rightarrow \mathbb{R}^+$$$, we say $$$g(n) = O(f(n))$$$ if there exist constants $$$c > 0$$$ and $$$n_0$$$ such that $$$g(n) \leq c \cdot f(n)$$$ for all $$$n \geq n_0$$$.

Notice that any function $$$g$$$ works, as long as it is at most $$$f$$$ (up to a constant factor) for large enough $$$n$$$. So big $$$O$$$ is just an upper bound on the number of operations of an algorithm, not necessarily the tightest upper bound on its complexity. In other words, when you say "the number of operations of this algorithm is $$$O(n^2)$$$", it actually means that the algorithm does not do significantly more than $$$n^2$$$ operations. That means that not only is an algorithm with $$$n^2$$$ steps $$$O(n^2)$$$, but algorithms with $$$n \log n$$$ or $$$n$$$ operations are $$$O(n^2)$$$ too!
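For instance, $$$n \log n = O(n^2)$$$: since $$$\log n \leq n$$$ for every $$$n \geq 1$$$, we have $$$n \log n \leq 1 \cdot n^2$$$ for all $$$n \geq 1$$$, so the constants $$$c = 1$$$ and $$$n_0 = 1$$$ witness the definition, even though the bound is far from tight.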

The notation that actually corresponds to the worst-case running time we are talking about in CP seems to be the $$$\Theta$$$ (big theta) notation.

Formal definition of big theta

We say $$$g(n) = \Theta(f(n))$$$ if there exist constants $$$c_1, c_2 > 0$$$ and $$$n_0$$$ such that $$$c_1 \cdot f(n) \leq g(n) \leq c_2 \cdot f(n)$$$ for all $$$n \geq n_0$$$.

For example, binary search does $$$\Theta(\log n)$$$ steps in the worst case.
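As a quick illustration (a minimal sketch I put together; `binary_search_steps` and the test in `main` are my own illustration, not from the course): the loop below halves the search range every iteration, so no query takes more than $$$\lfloor \log_2 n \rfloor + 1$$$ iterations, and a query for a value larger than every element actually reaches that count.

```cpp
#include <bits/stdc++.h>
using namespace std;

// Counts how many loop iterations (i.e. comparisons) one binary search performs.
int binary_search_steps(const vector<int>& a, int target) {
    int lo = 0, hi = (int)a.size() - 1, steps = 0;
    while (lo <= hi) {
        ++steps;                       // O(1) work per iteration
        int mid = lo + (hi - lo) / 2;  // the search range [lo, hi] is halved here
        if (a[mid] == target) break;
        if (a[mid] < target) lo = mid + 1;
        else hi = mid - 1;
    }
    return steps;                      // never exceeds floor(log2 n) + 1
}

int main() {
    int n = 1 << 20;                   // 2^20 sorted values 0..n-1
    vector<int> a(n);
    iota(a.begin(), a.end(), 0);
    // Searching for a value larger than everything forces the worst case.
    cout << binary_search_steps(a, n) << '\n';  // prints 21 = log2(n) + 1
}
```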

So my question is: why are we using the big-$$$O$$$ notation for time complexity in CP, when it's not fully accurate? Or have I misunderstood something about these notations? I don't want to be pedantic, but I'm genuinely wondering why we do this.

  • +84

»
21 month(s) ago, # |
  +7

A guess: because in CP the intended solution (the "std") is required to pass (or pass with almost 100% certainty) every test case that fits the constraints, people are only interested in the worst case in most cases.

»
21 month(s) ago, # |
  +190

People do use $$$\Theta$$$ in CP, if they want to emphasize that it is the exact complexity. But usually, we only care about the solution being fast enough and don't care about proving lower bounds on running time. In most cases both $$$O$$$ and $$$\Omega$$$ are obvious, but when they aren't, it doesn't make sense to strictly prove $$$\Omega$$$ (and thus $$$\Theta$$$), because it doesn't help you to get AC. That's why we use $$$O$$$. Also historic reasons I guess.

»
21 month(s) ago, # |
  +92

Maybe one reason is that O is much easier to type than Θ.

»
21 month(s) ago, # |
  -13

You say "binary search does $$$\Theta(\log n)$$$ steps in the worst case."

Informally, $$$\Theta$$$ gives you a lower bound on the best case, not the worst case. Binary search can terminate on the first iteration and only require one comparison. This is not $$$\Omega(\log n)$$$, so binary search is not $$$\Theta(\log n)$$$. Wikipedia corroborates this.

If you say that something is worst-case $$$\Theta(f(n))$$$, you are in effect asserting the performance is always $$$f(n)$$$ up to constant factors. Therefore, saying worst-case in conjunction with $$$\Theta(f(n))$$$ is either redundant or incorrect. I'm not sure if this is me nitpicking your statement or your misunderstanding, but either way, the original quoted statement is still incorrect.

The point of big-Oh is to provide a tight asymptotic upper bound. Note that we almost never use $$$o(f(n))$$$ for bounds on algorithms (hopefully your course explains little-Oh).

  • »
    »
    21 month(s) ago, # ^ |
      +45

    I don't think using big-theta notation for worst-case analysis is necessarily incorrect. $$$f(n) = \Theta(g(n))$$$ is well-defined at least for any functions $$$f, g : \mathbb{N} \rightarrow \mathbb{R}^+$$$. If $$$f(n)$$$ is defined as the maximum number of units of time an algorithm can take on any input of size $$$n$$$, then saying the algorithm is worst-case $$$\Theta(g(n))$$$ just means $$$f(n) = \Theta(g(n))$$$. This is only a statement about the worst-case inputs to an algorithm, so binary search would be $$$\Theta(\log n)$$$ in the worst case: for every $$$n$$$, the worst-case input of size $$$n$$$ causes binary search to take about $$$\log n$$$ steps.
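    To make that concrete (a worked instance of my own, assuming the usual halving implementation): for binary search on an array of size $$$n$$$, the maximum number of iterations over all inputs is $$$f(n) = \lfloor \log_2 n \rfloor + 1$$$, and for all $$$n \geq 2$$$ we have $$$\log_2 n \leq f(n) \leq 2 \log_2 n$$$, so the constants $$$c_1 = 1$$$, $$$c_2 = 2$$$, $$$n_0 = 2$$$ give exactly $$$f(n) = \Theta(\log n)$$$.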

  • »
    »
    21 month(s) ago, # ^ |
    Rev. 2   +62

    I think that zero things you said are true? Of course, the algorithmic understanding of big-O is different from the one in mathematical analysis, and might not be the same everywhere, but...

    I would certainly say that binary search is $$$\Theta(\log n)$$$ in the worst case (and also, implementing it in a way that gives it any chance of finishing in constant time is stupid, but that's a separate point).

    If you say that something is worst-case $$$\Theta(f(n))$$$, it means that
    - There exist constants $$$c_1 > 0$$$ and $$$N$$$ s.t. the number of operations performed by the algorithm on any input of size $$$n \ge N$$$ is not greater than $$$c_1 f(n)$$$;
    - There exist a constant $$$c_2 > 0$$$ and an infinite increasing sequence of input sizes $$$n_1 < n_2 < n_3 < \ldots$$$ s.t. for every $$$n_k$$$ there exists an input of size $$$n_k$$$ on which the number of operations performed by the algorithm is not less than $$$c_2 f(n_k)$$$.

    The point of big-O is to provide an asymptotic upper bound. Would you say that the statement "the fastest known deterministic primality check works in $$$O(n^c)$$$ for any $$$c > 0$$$" is incorrect? Note that we do use $$$o(f(n))$$$ for bounds on algorithms: for example, it is reasonable to ask "can we do the query in $$$o(\sqrt{n})$$$?" if you only have a sqrt-decomposition solution but are looking for something better.

  • »
    »
    21 month(s) ago, # ^ |
    Rev. 3   +62

    Worst case / best case / average case is a different ‘dimension’ of complexity analysis and thus is completely independent of $$$\Omega$$$, $$$O$$$, and $$$\Theta$$$.

    Example:

    Quick sort is worst case $$$\Theta(n^2)$$$ comparisons. Meaning: for any ‘big enough’ $$$n$$$ the maximum number of comparisons over all inputs of size $$$n$$$ is bounded by two quadratic functions (max number of comparisons is probably $$$n(n-1)/2$$$ and $$$n^2/4 \leq n(n-1)/2 \leq n^2$$$ for big enough $$$n$$$).

    Quick sort is average case $$$\Theta(n \log n)$$$ comparisons. Meaning: for any ‘big enough’ $$$n$$$ the expected number of comparisons over all inputs of size $$$n$$$ under a uniform probability distribution is bounded by two such functions: $$$c_1(n \log n) \leq E(n) \leq c_2(n \log n)$$$.

    Quick sort is best case $$$\Theta(n \log n)$$$ comparisons. Meaning: for any ‘big enough’ $$$n$$$ the minimum number of comparisons over all inputs of size $$$n$$$ is bounded by two functions of the form $$$c \cdot n \log n$$$ (even a perfectly balanced sequence of partitions performs about $$$n \log_2 n$$$ comparisons).
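    To see the worst-case and average-case claims in action, here is a quick sketch I wrote (deterministic quick sort with the first element as pivot; the names are mine): an already-sorted array forces the full $$$n(n-1)/2$$$ comparisons, while a shuffled one stays around $$$n \log n$$$.

```cpp
#include <bits/stdc++.h>
using namespace std;

long long comparisons = 0;

// Deterministic quick sort: always picks the first element of the range as pivot.
void quick_sort(vector<int>& a, int lo, int hi) {
    if (lo >= hi) return;
    int pivot = a[lo], i = lo;
    for (int j = lo + 1; j <= hi; ++j) {
        ++comparisons;                  // one comparison of a[j] against the pivot
        if (a[j] < pivot) swap(a[++i], a[j]);
    }
    swap(a[lo], a[i]);                  // place the pivot at its final position
    quick_sort(a, lo, i - 1);
    quick_sort(a, i + 1, hi);
}

int main() {
    int n = 1 << 12;                    // 4096 elements

    vector<int> sorted_input(n);        // 0, 1, ..., n-1: the worst case for this pivot rule
    iota(sorted_input.begin(), sorted_input.end(), 0);
    comparisons = 0;
    quick_sort(sorted_input, 0, n - 1);
    cout << "sorted:   " << comparisons << '\n';  // n(n-1)/2 = 8386560

    vector<int> shuffled_input = sorted_input;    // same values, random order
    mt19937 rng(12345);
    shuffle(shuffled_input.begin(), shuffled_input.end(), rng);
    comparisons = 0;
    quick_sort(shuffled_input, 0, n - 1);
    cout << "shuffled: " << comparisons << '\n';  // roughly 2*n*ln(n), about 68000
}
```

    With a random pivot instead of the first element, the sorted input stops being a bad case, which is the deterministic-vs-randomized distinction mentioned at the end of this comment.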


    Consider now a more complicated algorithm, like Dinic’s maximum flow. We know that the number of level graphs encountered in Dinic is at most $$$n$$$, that each blocking flow has at most $$$m$$$ augmenting paths, and that each augmenting path is found in amortized $$$O(n)$$$ time. So we know for sure that the maximum number of operations over all graphs with $$$n$$$ vertices and $$$m$$$ edges is bounded by some function $$$c n^2 m$$$ and, hence, the worst-case performance is $$$O(n^2 m)$$$. But this is not necessarily a tight bound: we haven’t proved that it’s not $$$O(nm)$$$, for instance.

    My point being, it’s sometimes hard to prove tight bounds for some non-trivial algorithms, but $$$O(\cdot)$$$ bounds are still useful to tell us how these algorithms ‘scale’ up (to borrow some trendy industry buzzwords).


    Since I’m nit-picky, I should mention that in the first example, I’m talking about deterministic quick sort (where the pivot is selected via a deterministic algorithm). For randomized quick sort, the number of comparisons on any input is $$$\Theta(n \log n)$$$ with high probability (and in expectation) afaik, which maybe makes it even more confusing.

»
21 month(s) ago, # |
  -19

where is that chromate00 guy, he knows how to do inverse greedy dynamic FFT in O(n^1.4 * log_27.89573478(﷽)), so i'm sure he can answer this very important question of yours

  • »
    »
    21 month(s) ago, # ^ |
      +3

    Don't use religious words in your sarcasm! It's very offensive!

  • »
    »
    21 month(s) ago, # ^ |
      +3

    excuse me I don't understand any single phrase out of this sentence wtf

»
21 month(s) ago, # |
Rev. 2   +34

Well, I know that some of the following lines may not be factually correct, but I thought it would be good to note this.

Sometimes it is not easy to prove a tight upper bound on an algorithm's time complexity. Most of the time we have an algorithm, and we know an upper bound on its time complexity. Sometimes we think we may be able to prove a tighter bound, but it is not always so easy to do so. Let's use 1790F - Timofey and Black-White Tree as an example. The official editorial explains a solution and states that it has a time complexity of $$$O(n \sqrt n)$$$. The proof is also given, so this is a proven bound. However, some people also think that this same solution is $$$O(n \log n)$$$. It would be significantly harder to prove this bound compared to the previous $$$O(n \sqrt n)$$$ bound. But still, if we can prove a bound equal to or better than what the official solution considers "good enough", then we are safe.

These kinds of situations come up from time to time in CP. Sometimes even the problemsetter can't find an optimal solution to a problem. But if they know some bound they can reach and consider it "good enough", that can still very well make a good task.