A – Repeat ACL
An easy programming-language-knowledge check, typical of the first task of AtCoder Beginner Contests. The following is a passing Kotlin submission:
fun main() {
    val k = readLine()!!.toInt()
    val ans = "ACL".repeat(k)
    println(ans)
}
B – Integer Preference
Assume there is an integer in both ranges and call it $$$x$$$. Note that both $$$x \ge \max(A, C)$$$ and $$$x \le \min(B, D)$$$ must therefore be true.
Thus $$$x$$$ exists if and only if $$$\max(A, C) \le \min(B, D)$$$ is true. This is the standard formula for finding the overlap of two ranges.
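The check can be sketched in a few lines of Python (an illustration only; the function name is ours):

```python
def ranges_intersect(a: int, b: int, c: int, d: int) -> bool:
    # An integer x with a <= x <= b and c <= x <= d exists
    # exactly when max(a, c) <= min(b, d).
    return max(a, c) <= min(b, d)
```

Note that the comparison is non-strict: ranges that merely touch at an endpoint still share that endpoint.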
C – Connect Cities
Note that the initial network of cities can be divided into one or more components, where any city of a component can reach any other city in that component following roads, but there are no roads connecting components together.
If there are $$$c$$$ components, Snuke can connect two of them by choosing one city from each component and building a road between them, which combines them into a single component; so the answer is $$$c - 1$$$.
Components can be found using breadth-first/depth-first searches; however, there's an easier way if you already have a special data structure known as DSU (disjoint-set union), and there is an implementation available right in the dsu module of the AtCoder library. So start with $$$c := N$$$; for each road given in the input, you can use same to check if the vertices are already connected; if they aren't, merge them and decrement $$$c$$$.
Note that though the input is $$$1$$$-indexed, the AtCoder DSU implementation is $$$0$$$-indexed.
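The whole solution can be sketched with a minimal hand-rolled DSU in Python standing in for ACL's dsu (the class and helper names are ours):

```python
class DSU:
    """Minimal disjoint-set union with `same` and `merge`, mirroring ACL's dsu."""
    def __init__(self, n: int):
        self.parent = list(range(n))

    def find(self, v: int) -> int:
        while self.parent[v] != v:
            self.parent[v] = self.parent[self.parent[v]]  # path halving
            v = self.parent[v]
        return v

    def same(self, a: int, b: int) -> bool:
        return self.find(a) == self.find(b)

    def merge(self, a: int, b: int) -> None:
        ra, rb = self.find(a), self.find(b)
        if ra != rb:
            self.parent[ra] = rb

def min_roads_to_connect(n: int, roads: list[tuple[int, int]]) -> int:
    dsu = DSU(n)
    c = n  # start with every city in its own component
    for a, b in roads:  # endpoints already converted to 0-indexed
        if not dsu.same(a, b):
            dsu.merge(a, b)
            c -= 1  # two components merged into one
    return c - 1
```

For example, four cities with no roads form four components and need three new roads.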
D – Flat Subsequence
Imagine an array $$$B$$$, initially filled with zeros. The answer can then, in principle, be obtained with the following algorithm:
For each $$$a$$$ in $$$A$$$:
- Find $$$\displaystyle\max _{j = a-k} ^{a+k} B_j$$$. Let it be $$$r$$$
- Set $$$B_a := r+1$$$. This works because you are finding the longest subsequence that can include $$$a$$$, and then extending it.
Then the final answer is $$$\max B_i$$$.
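The steps above can be sketched naively in Python (an illustration; we assume the values in $$$A$$$ are bounded by some max_val):

```python
def longest_flat_subsequence(a: list[int], k: int, max_val: int) -> int:
    # B is indexed by value, not by position in A.
    b = [0] * (max_val + 1)
    for x in a:
        # r = max of B over [x - k, x + k], clamped to valid indices
        lo, hi = max(0, x - k), min(max_val, x + k)
        r = max(b[lo:hi + 1])
        b[x] = r + 1  # extend the best subsequence that can end at value x
    return max(b)
```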
Unfortunately this is too slow as searching for the maximum in a range naively is $$$O(K)$$$, thus making the final complexity $$$O(NK)$$$. However, there is a special data structure that solves this problem, called a segment tree.
Segment trees work by storing the array data in the leaves of a fixed binary tree, then storing a "sum" in the ancestor nodes of those leaves. Updating a single entry is slower than a normal array as $$$O(\log n)$$$ ancestors need to be updated, but querying the "sum" of a range becomes much faster, as only $$$O(\log n)$$$ nodes need to be accessed, and the data "summed".
The scare quotes around "sum" imply that it's a more general term: segment trees can indeed implement sums, but more generally, they can implement any monoid, i.e. a set $$$S$$$ with a closed binary operation $$$\oplus : S \times S \rightarrow S$$$ that is both associative and has an identity element.
As you are only storing non-negative integers, the $$$\max$$$ operator indeed has an identity element: $$$0$$$. Even if you needed to store negative integers, you could typically set the identity to any number lower than anything else you'd use (just like how you might use a large number as "infinity" in DP/greedy problems).
The segtree module in the AtCoder library is an implementation of a segment tree. Be sure to read the documentation carefully; you need to set the operator and identity element. Ranges should be entered in half-open form, and be careful of accidentally exceeding the index limits in queries.
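For illustration, here is a minimal Python stand-in for such a max segment tree (op $$$= \max$$$, identity $$$e = 0$$$, half-open query ranges) applied to the algorithm above; this is not the ACL code, and all names are ours:

```python
class MaxSegtree:
    """Point-update / range-max segment tree; identity element is 0."""
    def __init__(self, n: int):
        self.size = 1
        while self.size < n:
            self.size *= 2
        self.data = [0] * (2 * self.size)

    def set(self, i: int, x: int) -> None:
        i += self.size
        self.data[i] = x
        while i > 1:  # recompute ancestors, O(log n)
            i //= 2
            self.data[i] = max(self.data[2 * i], self.data[2 * i + 1])

    def prod(self, l: int, r: int) -> int:
        """Max over the half-open range [l, r), O(log n)."""
        res = 0
        l += self.size
        r += self.size
        while l < r:
            if l & 1:
                res = max(res, self.data[l]); l += 1
            if r & 1:
                r -= 1; res = max(res, self.data[r])
            l //= 2; r //= 2
        return res

def solve_flat_subsequence(a: list[int], k: int, max_val: int) -> int:
    st = MaxSegtree(max_val + 1)
    for x in a:
        # clamp to valid indices; note the half-open upper bound
        lo, hi = max(0, x - k), min(max_val, x + k) + 1
        st.set(x, st.prod(lo, hi) + 1)
    return st.prod(0, max_val + 1)
```

This brings the total complexity down to $$$O(N \log M)$$$, where $$$M$$$ is the value bound.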
There is also a way to solve this using only an ordered map.
E – Replace Digits
Precalculate $$$10^x$$$ (under modulo, of course) for $$$0 \le x \le N$$$. Also precalculate $$$\displaystyle ones(x) = \sum _{j=0} ^{x-1} 10^j$$$ for $$$0 \le x \le N$$$, representing the value of a string of all ones of length $$$x$$$.
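The precalculation can be sketched as follows (assuming the modulus $$$998244353$$$; check the statement):

```python
MOD = 998244353  # assumed modulus

def precalc(n: int):
    pow10 = [1] * (n + 1)  # pow10[x] = 10^x mod MOD
    ones = [0] * (n + 1)   # ones[x] = value of "11...1" (x ones) mod MOD
    for x in range(1, n + 1):
        pow10[x] = pow10[x - 1] * 10 % MOD
        ones[x] = (ones[x - 1] * 10 + 1) % MOD
    return pow10, ones
```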
This problem should remind you a lot of the previous one, however you now have to also update on a range. This can be done using a lazy-propagating segment tree. It's "lazy" because range updates are not all done straight away, but could be "queued" up in an ancestor node, only being "pushed" toward the leaves when needed for queries.
There is a lazysegtree module in the AtCoder library as well. Configuring it for this problem is a little more involved than the previous one:
The monoid operator isn't a simple sum, but requires that the size of the segment to be known as well. Thus $$$S$$$ should be a tuple $$$(sum, size)$$$, with the operator $$$(sum_a, size_a) \oplus (sum_b, size_b) = (sum_a \cdot 10^{size_b} + sum_b, size_a + size_b)$$$. Its identity, $$$e = (0, 0)$$$. The $$$sum$$$ terms should be under modulo.
$$$F$$$, the mapping, can be a simple integer indicating the digit to update with; however, a "null / identity map" needs to be assigned as well, which could be $$$id = -1$$$. $$$composition(g, f)$$$ should be $$$g$$$ (the newer update is on the left, as in the traditional notation $$$(g \circ f)(x) = g(f(x))$$$). $$$mapping(f, s)$$$ should be $$$s$$$ if $$$f = id$$$, and otherwise $$$(f \cdot ones(size_s), size_s)$$$.
The rest is then just implementing the task. Note that when initializing the segment tree, the value $$$(1, 1)$$$ needs to be set individually to all indices; you can't just queue a $$$1$$$ range-update as they start out as the empty segment $$$(0, 0)$$$.
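The configuration can be sketched in Python as plain functions (a hedged illustration of the monoid and mapping, not the ACL interface itself; the modulus $$$998244353$$$ and the size bound are assumptions):

```python
MOD = 998244353  # assumed modulus
N = 20           # small illustrative size bound

# precalculated powers of ten and repunit values, as in the editorial
pow10 = [1] * (N + 1)
ones = [0] * (N + 1)
for x in range(1, N + 1):
    pow10[x] = pow10[x - 1] * 10 % MOD
    ones[x] = (ones[x - 1] * 10 + 1) % MOD

e = (0, 0)  # identity element: the empty segment
ID = -1     # identity map ("no pending update")

def op(a, b):
    # concatenation: a's digits shift left by b's length
    return ((a[0] * pow10[b[1]] + b[0]) % MOD, a[1] + b[1])

def mapping(f, s):
    # overwrite every digit of segment s with the digit f
    if f == ID:
        return s
    return (f * ones[s[1]] % MOD, s[1])

def composition(g, f):
    # the newer update g completely overwrites the older f
    return g
```

For example, concatenating the one-digit segments $$$(1, 1)$$$ and $$$(2, 1)$$$ yields $$$(12, 2)$$$, and applying the digit $$$7$$$ to it yields $$$(77, 2)$$$.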
You can also still solve this with an ordered map.
F – Random Max
Thanks to Everule for help with this problem.
Let's denote the random variates with $$$X_i$$$ (capital X) to avoid confusion with the parametric variable $$$x$$$ in the following notations.
Let $$$F(x) := \Pr[\max X \leq x]$$$, the cumulative distribution function (CDF) of the maximum variate. There is a formula for deriving the expected value from the CDF of a random variate:

$$$$$$E[X] = \int_0^\infty \left(1 - F(x)\right) dx - \int_{-\infty}^0 F(x)\, dx$$$$$$
The latter integral can be ignored as $$$F(x) = 0$$$ for all $$$x \leq 0$$$. We can also adjust the upper bound of the left integral to $$$\max R$$$, as $$$F(x) = 1$$$ when $$$x \geq \max R$$$. Thus we may simplify:

$$$$$$E[\max X] = \int_0^{\max R} \left(1 - F(x)\right) dx = \max R - \int_0^{\max R} F(x)\, dx$$$$$$
Now let's figure out the behavior of $$$F(x)$$$. We may note that, as the variates are independent, it's the product of the CDFs of all the individual random variates:

$$$$$$F(x) = \prod_{i=1}^{N} \Pr[X_i \leq x]$$$$$$
Let $$$f_i(x) := \Pr[X_i \leq x]$$$. Let's decompose $$$f_i(x)$$$, the CDF for a single variate uniform on $$$[L_i, R_i]$$$:

$$$$$$f_i(x) = \begin{cases} 0 & x < L_i \\ \dfrac{x - L_i}{R_i - L_i} & L_i \leq x \leq R_i \\ 1 & x > R_i \end{cases}$$$$$$
Note that the middle expression is a degree-$$$1$$$ polynomial. To analyze the behavior of $$$F(x)$$$ we can imagine a sweep line on decreasing $$$x$$$. When $$$x$$$ touches the $$$R_i$$$ of one of the given intervals, its corresponding $$$f_i(x)$$$ "switches on" and contributes its middle expression to the product. Then as soon as $$$x$$$ touches $$$\max L$$$ one of the CDFs goes to $$$0$$$, which sets the product to $$$0$$$ for good. (This matches intuition, as the maximum variate could never be less than $$$\max L$$$). From this we can conclude that $$$F(x)$$$ is a piecewise function, with each piece being describable by a polynomial. We can integrate each piece individually, and their sum would be the integral of $$$F(x)$$$.
What does this mean in terms of implementation? We could at first set variable $$$E := \max R$$$, and sort all intervals in non-increasing (descending) order of $$$R_i$$$ (henceforth we assume $$$R_i \geq R_{i+1}$$$ for all $$$1 \leq i \leq N-1$$$). Let $$$m$$$ denote the number of intervals where $$$R_i > \max L$$$. Let $$$S_{1..m} := R_{1..m}$$$, and $$$S_{m+1} := \max L$$$, which together denote the change-points of $$$F(x)$$$.
Now let $$$p_0(x)$$$ be the polynomial $$$1$$$ (storing all polynomials as arrays of coefficients). As you iterate $$$i$$$ over $$$[1, m]$$$, construct the new polynomial:

$$$$$$p_i(x) := p_{i-1}(x) \cdot \frac{x - L_i}{R_i - L_i}$$$$$$
Then integrate this polynomial over $$$[S_{i+1}, S_{i}]$$$ (the formula being derivable from the power rule for integrating polynomials):

$$$$$$\int_{S_{i+1}}^{S_i} p_i(x)\, dx = \sum_{j=0}^{i} \frac{c_j}{j+1} \left(S_i^{j+1} - S_{i+1}^{j+1}\right)$$$$$$

where $$$c_j$$$ is the coefficient of $$$x^j$$$ in $$$p_i(x)$$$.
Subtract each integral from $$$E$$$ and we have our desired expected value. Don't forget to multiply by the required scaling factor $$$(N+1)! \cdot \prod_{i=1}^N (R_i - L_i)$$$ before outputting.
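As a sanity check, the sweep can be prototyped with exact rational arithmetic instead of the modular arithmetic a real submission would use; the sketch below returns the exact expected value (without the scaling factor), and all names are of our choosing:

```python
from fractions import Fraction

def expected_max(intervals):
    """Exact E[max X_i] for independent X_i uniform on [L_i, R_i]."""
    intervals = sorted(intervals, key=lambda lr: -lr[1])  # descending R_i
    max_l = max(l for l, _ in intervals)
    max_r = intervals[0][1]
    # the m intervals with R_i > max L contribute pieces; others are cut off
    active = [lr for lr in intervals if lr[1] > max_l]
    s = [r for _, r in active] + [max_l]  # change-points S_1 .. S_{m+1}
    p = [Fraction(1)]    # p_0(x) = 1, stored as a coefficient array
    total = Fraction(0)  # accumulates the integral of F
    for i, (l, r) in enumerate(active):
        # p_i(x) = p_{i-1}(x) * (x - l) / (r - l)
        inv = Fraction(1, r - l)
        q = [Fraction(0)] * (len(p) + 1)
        for j, c in enumerate(p):
            q[j + 1] += c * inv
            q[j] -= c * l * inv
        p = q
        # integrate p_i over [S_{i+1}, S_i] via the power rule
        hi, lo = s[i], s[i + 1]
        total += sum(c * (Fraction(hi) ** (j + 1) - Fraction(lo) ** (j + 1)) / (j + 1)
                     for j, c in enumerate(p))
    return max_r - total
```

For instance, two independent uniforms on $$$[0, 1]$$$ give $$$2/3$$$, matching the known order-statistic result.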
This works in $$$\tilde O(N^2)$$$, as the polynomial grows in length by $$$1$$$ each iteration, taking about $$$\tilde O(N)$$$ to multiply and to integrate. The tildes (~) over the O's indicate hidden $$$\log$$$ factors from modular inverses and exponentiation. It can also be optimized to a strict $$$O(N^2)$$$ using the multiple-inverse trick and by accumulating the powers instead of exponentiating each time.