Thanks for participating!
The official solution to E is $$$O(n)$$$. If your solution to E has a larger complexity, I recommend reading the tutorial.
1696A - NIT orz!
How many operations will we perform?
At most one. Why?
Suppose we could perform exactly one operation. In this case the best we can achieve is $$$S=\max_{1\le i\le n}(a_i\mathrm{\ or\ }z)$$$. In fact, we can prove that $$$S$$$ is the answer in general.
Define $$$a_i'$$$ as the value of $$$a_i$$$ after some operations.
It suffices to prove the answer can never exceed $$$S$$$. Note that $$$z$$$ always remains a submask of its initial value after any number of operations, so $$$a_i$$$ always remains a submask of $$$(a_i\mathrm{\ or\ }z)$$$. This leads to the conclusion that $$$a_i'\le (a_i\mathrm{\ or\ }z)$$$ for all $$$i$$$. Thus $$$\max_{1\le i\le n} a_i'\le \max_{1\le i\le n}(a_i\mathrm{\ or\ }z)=S$$$.
Time complexity is $$$O(n)$$$.
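For completeness, a minimal sketch of this solution (the input format — $$$t$$$ test cases, each with $$$n$$$ and $$$z$$$ followed by the array — is an assumption here):

```cpp
#include <bits/stdc++.h>
using namespace std;

int main() {
    int t;
    scanf("%d", &t);
    while (t--) {
        int n;
        long long z, ans = 0;
        scanf("%d %lld", &n, &z);
        for (int i = 0; i < n; i++) {
            long long x;
            scanf("%lld", &x);
            ans = max(ans, x | z);  // S = max over i of (a_i or z)
        }
        printf("%lld\n", ans);
    }
    return 0;
}
```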
1696B - NIT Destroys the Universe
How many operations will we perform?
At most two. Why?
How to check if the array can be destroyed in $$$0$$$ or $$$1$$$ operations?
The answer is at most $$$2$$$, because doing the operation on $$$[1,n]$$$ at most twice always works. (If the array contains at least one zero, the mex of the whole array is positive, so the first operation makes all elements equal and positive, and the second makes them all zero. Otherwise the mex is zero and one operation suffices.)
If the array consists of zeroes, the answer is $$$0$$$.
If all non-zero elements form a contiguous segment in the array, the answer is $$$1$$$: apply the operation to exactly that segment, whose mex is $$$0$$$. To check this, find the leftmost and rightmost occurrences of non-zero elements and check that all elements between them are also non-zero.
Otherwise the answer is $$$2$$$.
Time complexity is $$$O(n)$$$.
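A sketch of this case analysis for a single test case (the helper name is illustrative):

```cpp
#include <bits/stdc++.h>
using namespace std;

// Returns 0 if the array is all zeroes, 1 if the non-zero elements form one
// contiguous segment, and 2 otherwise.
int minOperations(const vector<long long>& a) {
    int n = a.size();
    int l = 0, r = n - 1;
    while (l < n && a[l] == 0) l++;   // leftmost non-zero
    if (l == n) return 0;             // all zeroes
    while (a[r] == 0) r--;            // rightmost non-zero (exists, since a[l] != 0)
    for (int i = l; i <= r; i++)
        if (a[i] == 0) return 2;      // a zero strictly between two non-zeroes
    return 1;
}
```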
1696C - Fishingprince Plays With Array
The operation is reversible. (The two operations are reverses of each other.)
Try to find a middle state, such that we can turn both $$$a$$$ and $$$b$$$ into it.
Call the first operation "expand" and the second operation "shrink".
Keep applying expand to both arrays until no expand is possible, and call the resulting arrays $$$a'$$$ and $$$b'$$$. It suffices to check whether $$$a'=b'$$$. To implement this, you need to compress contiguous equal numbers, since the expanded arrays can be huge.
Proof of why this is necessary and sufficient:
Sufficiency is obvious, since the operations are reversible. We can do something like $$$a\to a'=b'\to b$$$.
Necessity: let $$$f(a)=a'$$$. It suffices to prove that performing one operation on $$$a$$$ does not change $$$f(a)$$$. An expand operation obviously doesn't change it. A shrink operation merges $$$a[i,i+m-1]$$$ into one element; when computing $$$f$$$ of the resulting array, this element gets expanded again at some point, so the result is the same as $$$f(a)$$$.
Time complexity is $$$O((n+k)\log_m V)$$$, where $$$V=\max a_i$$$.
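A sketch of the canonical form: the fully expanded array stored run-length encoded. Each count $$$m^e$$$ is at most $$$a_i$$$, so counts and their partial sums fit in long long.

```cpp
#include <bits/stdc++.h>
using namespace std;
typedef long long ll;

// Fully expand the array: a_i = v * m^e (with m not dividing v) becomes
// m^e copies of v. The result is stored as (value, count) pairs with
// adjacent equal values merged.
vector<pair<ll, ll>> canon(const vector<ll>& a, ll m) {
    vector<pair<ll, ll>> res;
    for (ll x : a) {
        ll cnt = 1;
        while (x % m == 0) x /= m, cnt *= m;
        if (!res.empty() && res.back().first == x)
            res.back().second += cnt;
        else
            res.push_back({x, cnt});
    }
    return res;
}

// b can be obtained from a (and vice versa) iff canon(a, m) == canon(b, m).
```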
1696D - Permutation Graph
This problem has two different solutions. The first one is more beautiful, but less straightforward.
The solution is $$$O(n)$$$. We don't need any data structures.
Instead of trying to construct the shortest path from $$$1$$$ to $$$n$$$, find a "transfer vertex" that we must pass through.
We will always pass through the position of the maximum element in the array.
Suppose the maximum element is $$$a_k$$$. Solve recursively for $$$dis(1,k)$$$ and $$$dis(k,n)$$$.
Denote $$$dis(x,y)$$$ as the length of the shortest path between $$$x$$$ and $$$y$$$.
Consider the position $$$i$$$ such that $$$a_i=n$$$, and assume $$$i\ne 1$$$ and $$$i\ne n$$$. For any segment that contains $$$i$$$, the maximum element is $$$a_i$$$. Thus, for $$$x<i<y$$$, $$$x$$$ and $$$y$$$ are never directly connected by an edge, which means that when going from $$$1$$$ to $$$n$$$, we have to pass through $$$i$$$. Let us solve recursively for $$$dis(1,i)$$$ and $$$dis(i,n)$$$; we describe $$$dis(1,i)$$$.
We already know that $$$a_i=n$$$, so $$$a_i$$$ is the maximum element of $$$[1,i]$$$. Consider the minimum element of $$$[1,i]$$$, say $$$a_j\ (j<i)$$$. By similar arguments, we can solve recursively for $$$dis(1,j)$$$ and $$$dis(j,i)$$$. However, note that $$$dis(j,i)$$$ equals $$$1$$$: since $$$a_j$$$ and $$$a_i$$$ are respectively the minimum and maximum of $$$[1,i]$$$, they are also the minimum and maximum of $$$[j,i]$$$, so $$$i$$$ and $$$j$$$ must be directly connected. Thus, we only need to solve recursively for $$$dis(1,j)$$$.
The process for $$$dis(i,n)$$$ is similar. Note that we only ever call $$$dis(l,r)$$$ with $$$l=1$$$ or $$$r=n$$$ (any other call returns $$$1$$$ immediately), so it suffices to pre-compute prefix and suffix minimums and maximums.
The time complexity is $$$O(n)$$$.
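A sketch of solution 1 (0-indexed; function name illustrative). The recursion becomes two walks from the position of the maximum, alternating between prefix/suffix argmins and argmaxes:

```cpp
#include <bits/stdc++.h>
using namespace std;

int shortestPath(const vector<int>& a) {  // a is a permutation of 1..n
    int n = a.size();
    // pmin/pmax: position of the minimum/maximum on the prefix [0, i];
    // smin/smax: the same for the suffix [i, n-1]
    vector<int> pmin(n), pmax(n), smin(n), smax(n);
    pmin[0] = pmax[0] = 0;
    for (int i = 1; i < n; i++) {
        pmin[i] = a[i] < a[pmin[i - 1]] ? i : pmin[i - 1];
        pmax[i] = a[i] > a[pmax[i - 1]] ? i : pmax[i - 1];
    }
    smin[n - 1] = smax[n - 1] = n - 1;
    for (int i = n - 2; i >= 0; i--) {
        smin[i] = a[i] < a[smin[i + 1]] ? i : smin[i + 1];
        smax[i] = a[i] > a[smax[i + 1]] ? i : smax[i + 1];
    }
    int k = pmax[n - 1];  // position of the maximum element n
    int ans = 0;
    // dis(1, k): walk left from k; a[pos] alternates between being the
    // maximum and the minimum of the prefix [0, pos]
    int pos = k;
    bool atMax = true;
    while (pos != 0) {
        pos = atMax ? pmin[pos] : pmax[pos];  // each step crosses one edge
        atMax = !atMax;
        ans++;
    }
    // dis(k, n): the symmetric walk to the right, with suffixes
    pos = k;
    atMax = true;
    while (pos != n - 1) {
        pos = atMax ? smin[pos] : smax[pos];
        atMax = !atMax;
        ans++;
    }
    return ans;
}
```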
Look at the last sample test case. Think of a simple greedy.
Repeatedly going to the rightmost directly connected vertex (the one with the largest index) works.
Use data structures to simulate the process. How?
We can prove that repeatedly going to the directly connected vertex with the largest index is a correct strategy. The proof is left as an exercise :) Hint: try to prove that the path visited here is the same as the path visited in solution 1.
Suppose we are at $$$i$$$. We want to find the largest $$$j>i$$$ such that $$$i$$$ and $$$j$$$ are directly connected. WLOG assume $$$a_{i+1}<a_i$$$. Then it cannot be the case that $$$a_j>a_i$$$, since then neither of $$$a_i,a_j$$$ would be $$$mn(i,j)$$$. Thus $$$a_j<a_i$$$. It follows that every $$$k$$$ with $$$i<k<j$$$ satisfies $$$a_k<a_i$$$; otherwise neither of $$$a_i,a_j$$$ would be $$$mx(i,j)$$$.
Let $$$r_i$$$ be the largest $$$p$$$, such that for all $$$t\in [i+1,p]$$$, $$$a_t<a_i$$$. From the arguments above we know $$$j\in [i+1,r_i]$$$. $$$r_i$$$ can be pre-computed with a stack, or binary search + some data structures.
Let $$$j_0$$$ be the position of the minimum element of $$$[i+1,r_i]$$$. Obviously $$$j_0$$$ is directly connected with $$$i$$$. For any $$$j_0<k\le r_i$$$, $$$mn(i,k)$$$ is $$$a_{j_0}$$$, so no such $$$k$$$ is directly connected with $$$i$$$. Thus $$$j_0$$$ is the desired $$$j$$$.
If we use data structures for range minimum, we get an $$$O(n\log n)$$$ solution, which easily passes (we are not sure whether $$$O(n\log^2 n)$$$ solutions pass; the large constraints were intended to cut those off).
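A sketch of the $$$O(n\log n)$$$ simulation (1-indexed; names illustrative): $$$r_i$$$ comes from monotonic stacks, and the range argmin/argmax from sparse tables.

```cpp
#include <bits/stdc++.h>
using namespace std;

struct SparseArg {  // sparse table returning the *position* of the range min/max
    vector<vector<int>> t;
    const vector<int>* a;
    bool takeMin;
    int better(int x, int y) const {
        return (((*a)[x] < (*a)[y]) == takeMin) ? x : y;
    }
    void build(const vector<int>& arr, bool mn) {
        a = &arr; takeMin = mn;
        int n = (int)arr.size() - 1, LOG = 32 - __builtin_clz(max(n, 1));
        t.assign(LOG + 1, vector<int>(n + 1));
        for (int i = 1; i <= n; i++) t[0][i] = i;
        for (int j = 1; j <= LOG; j++)
            for (int i = 1; i + (1 << j) - 1 <= n; i++)
                t[j][i] = better(t[j - 1][i], t[j - 1][i + (1 << (j - 1))]);
    }
    int query(int l, int r) const {
        int j = 31 - __builtin_clz(r - l + 1);
        return better(t[j][l], t[j][r - (1 << j) + 1]);
    }
};

int shortestPath(const vector<int>& a) {  // a[1..n] is a permutation; a[0] unused
    int n = (int)a.size() - 1;
    SparseArg mn, mx;
    mn.build(a, true);
    mx.build(a, false);
    // nxtG[i] / nxtS[i]: first t > i with a[t] > a[i] (resp. a[t] < a[i]), or n+1
    vector<int> nxtG(n + 2, n + 1), nxtS(n + 2, n + 1);
    stack<int> sg, ss;
    for (int i = n; i >= 1; i--) {
        while (!sg.empty() && a[sg.top()] < a[i]) sg.pop();
        nxtG[i] = sg.empty() ? n + 1 : sg.top();
        sg.push(i);
        while (!ss.empty() && a[ss.top()] > a[i]) ss.pop();
        nxtS[i] = ss.empty() ? n + 1 : ss.top();
        ss.push(i);
    }
    int i = 1, steps = 0;
    while (i < n) {
        int r, j;
        if (a[i + 1] < a[i]) {       // everything in [i+1, r_i] lies below a_i
            r = nxtG[i] - 1;         // r_i
            j = mn.query(i + 1, r);  // j_0, the farthest vertex connected to i
        } else {                     // the symmetric case
            r = nxtS[i] - 1;
            j = mx.query(i + 1, r);
        }
        i = j;
        steps++;
    }
    return steps;
}
```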
However, by building the Cartesian tree of the array and doing proper pre-computations, we can optimize this solution to $$$O(n)$$$.
1696E - Placing Jinas
Try to find the number of operations we perform on a specific cell $$$(i,j)$$$; call it $$$f(i,j)$$$.
Write the recurrence formula for $$$f(i,j)$$$. What is $$$f(i,j)$$$?
$$$f(i,j)=\binom{i+j}j$$$
The answer is the sum of $$$f(i,j)$$$ over all white cells $$$(i,j)$$$. Use some combinatorics formula to speed it up.
Let us find out the number of operations we do on a specific cell $$$(i,j)$$$, call it $$$f(i,j)$$$.
Every operation performed on $$$(i-1,j)$$$ puts one doll on $$$(i,j)$$$, which costs one more operation on $$$(i,j)$$$ to remove. A similar observation holds for $$$(i,j-1)$$$.
Thus $$$f(i,j)=f(i,j-1)+f(i-1,j)$$$ (given that $$$(i,j),(i-1,j),(i,j-1)$$$ are all white cells). Note that $$$a$$$ is non-increasing: this means that if $$$(i,j)$$$ is white, then $$$(i-1,j)$$$ and $$$(i,j-1)$$$ are both white as well (when they exist). So we can conclude that $$$f(i,j)=f(i,j-1)+f(i-1,j)$$$ holds whenever $$$(i,j)$$$ is white.
Another way to see the formula: $$$f(i,j)$$$ is the number of paths from $$$(0,0)$$$ to $$$(i,j)$$$ that only go down or right by $$$1$$$ step each move. This implies $$$f(i,j)=\binom{i+j}j$$$.
From this, we know that the answer is $$$\sum_{i=0}^n\sum_{j=0}^{a_i-1} \binom{i+j}{i}$$$. With the equation $$$\sum_{i=0}^k\binom{n+i}n=\binom{n+k+1}{n+1}$$$, we know that the answer is $$$\sum_{i=0}^n\binom{i+a_i}{i+1}$$$.
The time complexity is $$$O(n+V)$$$, where $$$V=\max a_i$$$.
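A sketch of the final formula (the modulus $$$10^9+7$$$ and the input format — $$$n$$$, then $$$a_0,\dots,a_n$$$ — are taken from the statement and assumed here):

```cpp
#include <bits/stdc++.h>
using namespace std;
typedef long long ll;
const ll MOD = 1e9 + 7;

ll pw(ll b, ll e) {
    ll r = 1;
    for (b %= MOD; e; e >>= 1, b = b * b % MOD)
        if (e & 1) r = r * b % MOD;
    return r;
}

int main() {
    int n;
    scanf("%d", &n);
    vector<ll> a(n + 1);
    for (int i = 0; i <= n; i++) scanf("%lld", &a[i]);
    int N = n + (int)*max_element(a.begin(), a.end()) + 2;
    vector<ll> fact(N), inv(N);
    fact[0] = 1;
    for (int i = 1; i < N; i++) fact[i] = fact[i - 1] * i % MOD;
    inv[N - 1] = pw(fact[N - 1], MOD - 2);
    for (int i = N - 1; i >= 1; i--) inv[i - 1] = inv[i] * i % MOD;
    auto C = [&](ll x, ll y) -> ll {  // binomial coefficient C(x, y)
        if (y < 0 || y > x) return 0;
        return fact[x] * inv[y] % MOD * inv[x - y] % MOD;
    };
    ll ans = 0;
    for (int i = 0; i <= n; i++)
        ans = (ans + C(i + a[i], i + 1)) % MOD;  // hockey stick per row
    printf("%lld\n", ans);
    return 0;
}
```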
1696F - Tree Recovery
The solution does not contain painful casework or deadly implementation.
Suppose we already know that edge $$$(i,j)$$$ exists in the tree. What can we deduce from it?
We can immediately recover the whole tree.
Read the hints first to understand the solution better.
Construct a graph with $$$\binom n2$$$ vertices $$$(1,2),(1,3),\dots,(n-1,n)$$$.
If $$$dis(a,b)=dis(b,c)$$$, link an undirected edge between $$$(a,b)$$$ and $$$(b,c)$$$.
Observe that all edges of the tree form a connected component of size exactly $$$n-1$$$ in this graph: two tree edges sharing an endpoint are linked (both distances equal $$$1$$$), and any pair linked to a tree edge also has distance $$$1$$$, i.e., is itself a tree edge.
Find all components of size $$$n-1$$$ and check whether the pairs in each one form a tree that satisfies the input. There are at most $$$\dfrac n2$$$ such components, so the complexity is $$$O(n^4)$$$. Proper pre-computation and the use of bitsets can reduce the complexity to $$$O(n^4/w)$$$.
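A sketch of the component construction (verification omitted). Here `eq(b, a, c)` stands for the input predicate $$$dis(a,b)=dis(c,b)$$$, however it is actually encoded in the input; all names are illustrative.

```cpp
#include <bits/stdc++.h>
using namespace std;

struct DSU {
    vector<int> p;
    DSU(int n) : p(n) { iota(p.begin(), p.end(), 0); }
    int find(int x) { return p[x] == x ? x : p[x] = find(p[x]); }
    void unite(int x, int y) { p[find(x)] = find(y); }
};

// Returns the edge sets of all components of size exactly n - 1; each of them
// is a candidate tree that must still be checked against the whole input.
vector<vector<pair<int, int>>> candidates(int n, function<bool(int, int, int)> eq) {
    auto id = [&](int a, int b) {  // index of the pair {a, b}
        if (a > b) swap(a, b);
        return a * n + b;          // sparse but simple indexing
    };
    DSU dsu(n * n);
    for (int b = 0; b < n; b++)                      // common endpoint b
        for (int a = 0; a < n; a++) if (a != b)
            for (int c = a + 1; c < n; c++) if (c != b)
                if (eq(b, a, c)) dsu.unite(id(a, b), id(b, c));
    map<int, vector<pair<int, int>>> comp;           // pairs grouped by component
    for (int a = 0; a < n; a++)
        for (int b = a + 1; b < n; b++)
            comp[dsu.find(id(a, b))].push_back({a, b});
    vector<vector<pair<int, int>>> res;
    for (auto& [root, edges] : comp)
        if ((int)edges.size() == n - 1) res.push_back(edges);
    return res;
}
```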
1696G - Fishingprince Plays With Array Again
What kind of problem is this?
Linear programming.
Consider the dual.
Consider the case $$$n=2$$$. Draw the linear program in the $$$xOy$$$-plane and try to observe what the answer might be.
First we solve the problem with only 1 query on the whole array $$$A$$$.
This is a linear programming problem. Let $$$t_i,s_i\ge 0$$$ denote the total amount of the two kinds of operations performed on the pair $$$(i,i+1)$$$; we want to minimize $$$\sum_i(t_i+s_i)$$$ subject to $$$Xt_i+Ys_i+Yt_{i-1}+Xs_{i-1}\ge A_i$$$ for every position $$$i$$$.
Consider its dual, with one variable $$$x_i\ge 0$$$ per position: maximize $$$\sum_i A_ix_i$$$ subject to $$$Xx_i+Yx_{i+1}\le 1$$$ and $$$Yx_i+Xx_{i+1}\le 1$$$ for every $$$i$$$.
Assume WLOG that $$$X\le Y$$$ (the two operations are symmetric in $$$X$$$ and $$$Y$$$). We now prove that there exists an optimal solution to the dual problem in which every $$$x_i$$$ takes one of three values: $$$1/Y$$$, $$$1/(X+Y)$$$, $$$0$$$.
The proof is as follows. It is well-known that an optimal solution to a linear program lies at a vertex of the "multi-dimensional convex polygon" which the restrictions surround. Thus we are only interested in $$$x$$$ that satisfy several "=" restrictions (and those restrictions should really intersect at one point, meaning that they uniquely determine $$$x$$$). Consider any sufficient subset of them (one that uniquely determines $$$x$$$). If a restriction involves $$$x_p,x_q$$$, link an undirected edge between $$$p$$$ and $$$q$$$; if a restriction involves only $$$x_p$$$ (i.e. $$$x_p=0$$$), link a self-loop on $$$p$$$. Being sufficient means that every connected component of this graph has exactly one cycle. However, for an edge $$$(u,v)$$$ we know that either $$$u=v+1$$$ or $$$u=v$$$, so every cycle is either $$$(i\to i+1\to i)$$$ or $$$i\to i$$$. If a cycle is $$$(i\to i+1\to i)$$$, all $$$x_i$$$ in its component equal $$$1/(X+Y)$$$; if a cycle is $$$i\to i$$$, all $$$x_i$$$ in its component equal $$$1/Y$$$ or $$$0$$$ (not $$$1/X$$$, which would violate the constraints).
Thus we can use dp to solve the dual problem. Let $$$dp(i,0/1/2)$$$ be the maximum of $$$\sum_{j\le i}A_jx_j$$$ when $$$x_i$$$ takes the $$$0/1/2$$$-th candidate value above. Transitions are straightforward.
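A sketch of this dp for a single query on the whole array, assuming WLOG $$$X\le Y$$$ (the function name is illustrative). It uses the integer scaling from the precision paragraph below: all values are multiplied by $$$Y(X+Y)$$$, so the candidates $$$1/Y,1/(X+Y),0$$$ become $$$X+Y,Y,0$$$ and the return value has to be divided by $$$Y(X+Y)$$$ at the end.

```cpp
#include <bits/stdc++.h>
using namespace std;
typedef long long ll;

// States: 0 <-> x_i = 1/Y, 1 <-> x_i = 1/(X+Y), 2 <-> x_i = 0.
ll dualScaled(const vector<ll>& A, ll X, ll Y) {  // requires X <= Y
    const ll NEG = LLONG_MIN / 4;
    ll c[3] = {X + Y, Y, 0};  // candidate values of x_i, scaled by Y*(X+Y)
    // allowed[s][t]: may x_{i+1} take candidate t when x_i takes candidate s?
    // (exactly the pairs satisfying X x_i + Y x_{i+1} <= 1 and Y x_i + X x_{i+1} <= 1)
    bool allowed[3][3] = {{0, 0, 1}, {0, 1, 1}, {1, 1, 1}};
    ll dp[3], ndp[3];
    for (int s = 0; s < 3; s++) dp[s] = A[0] * c[s];
    for (size_t i = 1; i < A.size(); i++) {
        for (int t = 0; t < 3; t++) {
            ndp[t] = NEG;
            for (int s = 0; s < 3; s++)
                if (allowed[s][t]) ndp[t] = max(ndp[t], dp[s]);
            ndp[t] += A[i] * c[t];
        }
        memcpy(dp, ndp, sizeof dp);
    }
    return max({dp[0], dp[1], dp[2]});  // real answer: this divided by Y*(X+Y)
}
```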
For multiple queries, the transitions can be written as products of $$$3\times 3$$$ matrices over the $$$(\max,+)$$$ semiring, and we can use a segment tree to maintain updates.
About precision issues: actually, we can avoid floating point numbers completely. Note that all numbers in this problem are fractions with denominator $$$Y(X+Y)$$$. Also note that the answer does not exceed $$$(\sum a_i)/Y$$$. This means the numerator does not exceed $$$(\sum a_i)\times (X+Y)<10^{18}$$$, so we can use long longs to store only the numerators. If you use double in C++, the relative error of one operation is less than $$$10^{-15}$$$, and $$$10^{-15}\times n<10^{-9}$$$, so using doubles is also enough.
Complexity: $$$O(n+q\log n)$$$.
1696H - Maximum Product?
Find a way to calculate the maximum product that can be turned into counting.
Use monotonicity to reduce the complexity.
First, we describe a strategy to find the answer for a single subset. If the whole subset is negative and $$$K$$$ is odd, the answer is the product of the $$$K$$$ maximum numbers in it (the product is negative, so we want the smallest absolute values). Otherwise, take the $$$K$$$ numbers with the maximum absolute values. If there is an even number of negative numbers among them, we're done. Otherwise, either find the smallest chosen positive element and change it to the unchosen negative element with the maximum absolute value, or find the largest chosen negative element and change it to the maximum unchosen positive element, whichever change gives the larger product.
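A sketch of this per-subset strategy, convenient for stress-testing the counting solution (zeroes are treated as positive here, and products are assumed to fit in __int128):

```cpp
#include <bits/stdc++.h>
using namespace std;
typedef long long ll;

// Maximum product of exactly K elements of v, following the strategy above.
__int128 maxProductK(vector<ll> v, int K) {
    int n = v.size();
    if (all_of(v.begin(), v.end(), [](ll x) { return x < 0; }) && K % 2 == 1) {
        // all negative, odd K: the product is negative, so take the K numbers
        // with the smallest absolute values, i.e. the K maximums
        sort(v.rbegin(), v.rend());
        __int128 p = 1;
        for (int i = 0; i < K; i++) p *= v[i];
        return p;
    }
    sort(v.begin(), v.end(), [](ll a, ll b) { return llabs(a) > llabs(b); });
    __int128 p = 1;  // product of the K numbers with the maximum absolute values
    int negs = 0;
    for (int i = 0; i < K; i++) p *= v[i], negs += v[i] < 0;
    if (negs % 2 == 0 || p == 0 || K == n) return p;
    // odd number of negatives: try both changes, keep the better product
    __int128 best = 0;
    bool found = false;
    // change 1: smallest chosen positive -> unchosen negative of maximum |.|
    for (int i = K - 1; i >= 0; i--) if (v[i] > 0) {
        for (int j = K; j < n; j++) if (v[j] < 0) {
            __int128 q = p / v[i] * v[j];
            if (!found || q > best) best = q, found = true;
            break;
        }
        break;
    }
    // change 2: chosen negative of smallest |.| -> largest unchosen non-negative
    for (int i = K - 1; i >= 0; i--) if (v[i] < 0) {
        for (int j = K; j < n; j++) if (v[j] >= 0) {
            __int128 q = p / v[i] * v[j];
            if (!found || q > best) best = q, found = true;
            break;
        }
        break;
    }
    return found ? best : p;  // at least one change is always possible here
}
```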
This gives a direct dp solution. Split all $$$a_i$$$ into two arrays $$$pos$$$ and $$$neg$$$ consisting of the positive and negative ones ($$$0$$$ goes to an arbitrary one), sorted by absolute value. Calculate $$$fpos(i,k)$$$: the sum, over all subsets of $$$pos[1\dots i]$$$ that contain $$$pos_i$$$, of the product of the largest $$$k$$$ numbers; $$$fneg(i,k)$$$ similarly; and $$$gneg(i,k)$$$: the sum, over all subsets of $$$neg[i\dots |neg|]$$$ that contain $$$neg_i$$$, of the product of the $$$k$$$ numbers smallest in absolute value. Enumerating the following, we can solve the original problem in $$$O(n^5)$$$:
- the number of positive elements among the $$$K$$$ numbers with the maximum absolute value in the subset under consideration, $$$p$$$.
- the smallest positive element in the $$$K$$$ numbers with the maximum absolute value, $$$pos_i$$$.
- the greatest negative element in the $$$K$$$ numbers with the maximum absolute value, $$$neg_j$$$.
- (if $$$p$$$ is odd) the greatest positive element not in the $$$K$$$ numbers with the maximum absolute value, $$$pos_k$$$.
- (if $$$p$$$ is odd) the smallest negative element not in the $$$K$$$ numbers with the maximum absolute value, $$$neg_l$$$.
The contribution to the answer can be represented as a sum of products of $$$fpos$$$, $$$fneg$$$, and powers of two. We leave the details as an exercise.
However, notice that the "enumerating $$$k,l$$$" part has nothing to do with $$$p$$$, so we can pre-calculate the contribution of $$$k,l$$$ for every pair $$$(i,j)$$$, giving an $$$O(n^4)$$$ algorithm.
What's more, for fixed $$$i,j,k$$$, the $$$l$$$-s for which we do the first change form a prefix/suffix of all $$$l$$$, and likewise the $$$l$$$-s for which we do the second change. So with two pointers and prefix sums, we can pre-calculate the contribution of every $$$(i,j)$$$ in $$$O(n^3)$$$, which is fast enough.
You might be afraid that $$$600^3$$$ is too slow for 1.5 seconds. However, the two $$$O(n^3)$$$ parts of the algorithm actually run in $$$O(n\times cntpos\times cntneg)$$$ (where $$$cntpos,cntneg$$$ are the numbers of positive/negative integers in the array), leading to a constant factor of at most $$$1/4$$$. Thus the algorithm actually runs very fast (less than 0.25 seconds). For similar constant-factor reasons, however, the $$$O(n^4)$$$ solution takes only about 6.5 seconds on Codeforces, so we had to set a seemingly tight limit.