Thank you everyone for participating in The 3rd Chromate Cup Algorithm Division! The full problemset can be accessed at (link) for upsolving. Also, the profile badges/backgrounds are a bit delayed, I am too busy ;-;
Simulate until you feel confident, e.g. until $$$n=10$$$. What do you see? I think the result should give you confidence.
Let us think step by step starting from $$$n=1$$$.
- If $$$n=1$$$, $$$n=l$$$ is true, therefore no swap will happen.
- If $$$n=2$$$, $$$n=l$$$ is true, therefore no swap will happen.
- If $$$n=3$$$, $$$l=1$$$, so $$$B_1$$$ and $$$B_3$$$ are swapped. Now $$$1$$$ is on index $$$3$$$.
- If $$$n>3$$$, $$$l$$$ is a power of $$$2$$$, and is therefore never $$$3$$$; since $$$n \neq 3$$$ as well, position $$$3$$$ is never involved in a swap again. The index of $$$1$$$ never changes afterwards.
Therefore, if $$$n\le 2$$$, the answer is $$$1$$$. Otherwise, the answer is $$$3$$$.
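A minimal sketch of the resulting solution; the input format (a single integer $$$n$$$) is my assumption, check the actual statement.

```cpp
#include <iostream>

int main() {
    long long n;
    std::cin >> n;
    // Derived above: 1 stays at index 1 for n <= 2, and ends up at index 3 otherwise.
    std::cout << (n <= 2 ? 1 : 3) << '\n';
}
```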
An upper bound is given in the examples.
You may simply implement what is written in the description. If we had to express the size of the $$$n$$$-th super prime in Big-O notation, it would be $$$\mathcal{O}(n \log^2 n)$$$, but such a technical analysis was not required, because we know the largest answer is $$$318 \, 137$$$ thanks to the examples. The intended solution uses the sieve of Eratosthenes, but a solution using $$$\mathcal{O}(\sqrt{x})$$$ primality tests may also pass if its constant factor is small enough.
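For reference, a sketch of the sieve approach. The single-integer input format is an assumption, and the sieve bound is chosen from the largest answer mentioned above.

```cpp
#include <bits/stdc++.h>
using namespace std;

int main() {
    // The examples tell us the largest answer is 318137, so sieving a
    // little past that bound suffices.
    const int LIMIT = 320000;
    vector<bool> composite(LIMIT + 1, false);
    vector<int> primes;
    for (int i = 2; i <= LIMIT; i++) {
        if (!composite[i]) {
            primes.push_back(i);
            for (long long j = 1LL * i * i; j <= LIMIT; j += i)
                composite[j] = true;
        }
    }
    // A super prime is the p-th prime where p itself is prime (1-indexed):
    // 3, 5, 11, 17, 31, ...
    vector<int> super_primes;
    for (int p : primes)            // p runs over prime values, used as 1-based positions
        if (p <= (int)primes.size())
            super_primes.push_back(primes[p - 1]);

    int n;
    cin >> n;
    cout << super_primes[n - 1] << '\n';
}
```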
Fix the root of the Y. The number of Y's contributed by this specific root is now easy to see.
After fixing the root in the graph, it is not hard to see that the number of Y's with this root is the number of ways to choose the three leaves. When the degree of the root is $$$k$$$, the number of ways to choose the three leaves is $$${k \choose 3}$$$.
Therefore, after we compute the degree sequence, the answer is $$$\sum_{v} {\deg(v) \choose 3}$$$.
The task did not need an understanding of modular multiplicative inverses; the value of $$${10^5 \choose 3}$$$ simply fits in a long long.
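For completeness, a short sketch of the whole solution; the input format ($$$n$$$ and $$$m$$$ followed by the edges) is an assumption.

```cpp
#include <bits/stdc++.h>
using namespace std;

int main() {
    int n, m;
    cin >> n >> m;
    vector<long long> deg(n + 1, 0);
    for (int i = 0; i < m; i++) {
        int u, v;
        cin >> u >> v;
        deg[u]++;
        deg[v]++;
    }
    long long ans = 0;
    for (int v = 1; v <= n; v++)
        ans += deg[v] * (deg[v] - 1) * (deg[v] - 2) / 6;  // C(deg, 3); zero when deg < 3
    cout << ans << '\n';
}
```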
Try to implement what is explicitly written in the statement. It works surprisingly fast.
If we directly implement what is written in the description, a single lawnmowing process will take $$$\mathcal{O}(n)$$$ or $$$\mathcal{O}(m)$$$ time in the worst case. However, it turns out that this direct implementation still passes: the total time complexity is not $$$\mathcal{O}((n+m)Q)$$$, but in fact $$$\mathcal{O}(nm+Q)$$$. Why is it so?
Each time the lawnmower robot mows a cell of grass, $$$\mathcal{O}(1)$$$ time is used. Also, whenever the lawnmower robot terminates, $$$\mathcal{O}(1)$$$ time is spent. Initially there are $$$nm$$$ cells with grass, and each cell is mowed at most once, so the former contributes $$$\mathcal{O}(nm)$$$ in total. Each query starts exactly one mowing process, which terminates exactly once, so the latter contributes $$$\mathcal{O}(Q)$$$. In conclusion, the total time complexity is $$$\mathcal{O}(nm+Q)$$$. This method of time complexity analysis is called "amortized analysis".
Take some time to think about how the second condition affects the optimal solution.
If we only consider the first condition of the task, this is exactly the Traveling Salesman Problem (TSP) with Euclidean metric as edge lengths. How does the second condition affect the optimal solution?
In fact, the second condition does not affect the optimal solution at all. This is because the optimal solution always satisfies the second condition, which can be proven by contradiction.
Let us assume that the optimal solution is some Hamiltonian circuit $$$A \to \cdots \to B \to C \to \cdots \to D \to A$$$ in which the edges $$$B \to C$$$ and $$$D \to A$$$ intersect. Let us call this intersection $$$Q$$$. Then $$$\overline{BD} < \overline{BQ}+\overline{DQ}$$$ and $$$\overline{AC} < \overline{AQ}+\overline{CQ}$$$ hold due to the triangle inequality. Since $$$Q$$$ lies on both segments, $$$\overline{BQ}+\overline{CQ}=\overline{BC}$$$ and $$$\overline{AQ}+\overline{DQ}=\overline{AD}$$$, so adding the two inequalities gives $$$\overline{BD}+\overline{AC}<\overline{BC}+\overline{AD}$$$. Therefore $$$A \to \cdots \to B \to D \to \cdots \to C \to A$$$, obtained by reversing the part between $$$C$$$ and $$$D$$$, has a smaller cost. This contradicts the assumption.
For the strict triangle inequality to fail, either $$$\triangle BDQ$$$ or $$$\triangle ACQ$$$ must be degenerate (i.e. have collinear vertices), turning the inequality into an equality. However, by definition, $$$Q$$$ is both on $$$\overline{BC}$$$ and on $$$\overline{AD}$$$. Thus, if $$$\triangle BDQ$$$ is degenerate, the three points $$$B$$$, $$$C$$$, $$$D$$$ are collinear, and if $$$\triangle ACQ$$$ is degenerate, the three points $$$A$$$, $$$C$$$, $$$D$$$ are collinear. Either case contradicts the task's constraints.
Therefore, the optimal solution of TSP already satisfies the second condition. You may well ignore the second condition and compute the optimal solution of TSP.
We made the constraints very lenient: both $$$\mathcal{O}(n!)$$$ and $$$\mathcal{O}(n!\times n)$$$ should pass. Even some solutions that explicitly check the line-line intersections (possibly with some branch-and-bound in the backtracking), with time complexity $$$\mathcal{O}(n!\times n^2)$$$, can pass too.
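A sketch of the brute force, fixing the first vertex so that each tour is enumerated once; the input format ($$$n$$$, then $$$n$$$ coordinate pairs) is an assumption.

```cpp
#include <bits/stdc++.h>
using namespace std;

int main() {
    int n;
    cin >> n;
    vector<double> x(n), y(n);
    for (int i = 0; i < n; i++) cin >> x[i] >> y[i];
    auto dist = [&](int a, int b) {
        return hypot(x[a] - x[b], y[a] - y[b]);
    };
    vector<int> perm(n);
    iota(perm.begin(), perm.end(), 0);
    double best = 1e18;
    do {
        double cost = 0;
        for (int i = 0; i < n; i++)
            cost += dist(perm[i], perm[(i + 1) % n]);
        best = min(best, cost);
        // Permuting only perm[1..n-1] fixes vertex 0 and skips rotations.
    } while (next_permutation(perm.begin() + 1, perm.end()));
    printf("%.9f\n", best);
}
```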
Have you tried filling every second column? The intended solution doesn't deviate much from this.
The answer to the existence question is always YES. There are multiple constructions that satisfy the conditions, and only one is explained here.
WLOG assume $$$n \ge m$$$. (Of course it does not hurt if we don't assume this, but we get a cleaner picture if we do.) Then, let us call the area of the first two rows $$$A$$$, and the rest $$$B$$$. From $$$A$$$, we will find a subset $$$A'$$$ whose size is exactly half of $$$|A|$$$. From $$$B$$$, we will find a subset $$$B'$$$ whose size is at most half of $$$|B|$$$, so that $$$|A' \cup B'| \le \frac{nm}{2}$$$.
This can be constructed as follows. $$$A'$$$ is the set of all vertices on the second row. $$$B'$$$ is the set of vertices in $$$B$$$ that are in even-indexed columns. It is not hard to prove that $$$|A'| = \frac{|A|}{2}$$$ and $$$|B'| \le \frac{|B|}{2}$$$ both hold. It is also not hard to prove that $$$A' \cup B'$$$ is a connected dominating set: every chosen column segment touches the fully chosen second row, the first row is dominated by the second row, and every remaining cell lies in or next to an even-indexed column. Thus $$$A' \cup B'$$$ is a set satisfying the conditions of the task.
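A sketch of this construction. The YES-plus-grid output format and the orientation handling are assumptions, and the degenerate $$$1 \times 1$$$ grid is left unhandled.

```cpp
#include <bits/stdc++.h>
using namespace std;

int main() {
    int n, m;
    cin >> n >> m;
    bool swapped = n < m;                 // WLOG n >= m: work in the transposed grid
    if (swapped) swap(n, m);
    vector<string> g(n, string(m, '0'));
    if (n >= 2)                           // the 1 x 1 grid needs separate handling
        for (int c = 0; c < m; c++) g[1][c] = '1';     // A': the entire second row
    for (int r = 2; r < n; r++)                         // B': even-indexed columns below
        for (int c = 1; c < m; c += 2) g[r][c] = '1';   // columns 2, 4, ... (1-indexed)
    cout << "YES\n";
    if (!swapped)
        for (auto& row : g) cout << row << '\n';
    else                                                // print transposed back
        for (int c = 0; c < m; c++) {
            for (int r = 0; r < n; r++) cout << g[r][c];
            cout << '\n';
        }
}
```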
Can you guess the solution with the minimum cardinality of the set $$$D$$$? In fact, it was answered in 2021 in the paper "Connected domination in grid graphs". The task idea did come from that paper, but, well, I couldn't have implemented what's in the paper, and I don't expect that anyone could have done it in less than three hours.
The usual binary search does not work. Think smarter.
The solution looks a little similar to a greedy algorithm that decomposes a number into a sum of multiple square numbers.
As the function $$$\sqrt{x-a}-b$$$ is monotonically increasing in $$$x$$$, it looks like we can binary search for the answer. However, binary search using only $$$b$$$ can (and very often does) leave an uncertainty on the order of $$$\sqrt{x}$$$, which is far too large. Binary search on $$$a$$$ alone is impossible because $$$x-a<0$$$ may occur, and a linear search on $$$a$$$ would use far too many questions.
Before explaining the solution, we will modify the formula a little. As $$$x-a$$$ is always nonnegative, we do not need an absolute value sign when we square both sides of $$$\sqrt{x-a} < b$$$. The inequality changes into $$$x-a<b^2$$$.
Then, simply moving $$$b^2$$$ to the left-hand side, the inequality changes into $$$x-a-b^2<0$$$.
Using this, we can implement the comparison without any floating point operation. Now here is the solution.
Across the binary searches, maintain the interval that $$$x$$$ can lie in. Initially the interval is $$$[0,10^{18}]$$$. If the current size of the interval is $$$S$$$, one binary search can shrink it to an interval of size no greater than $$$2\left\lfloor{\sqrt{S}}\right\rfloor$$$: set $$$a$$$ to the left end of the current interval, then binary search over $$$b$$$. Repeating this process, $$$S$$$ becomes $$$\mathcal{O}(1)$$$ after $$$\mathcal{O}(\log \log S)$$$ rounds of binary search, and once $$$S$$$ is $$$\mathcal{O}(1)$$$ we can finish with a linear search. How many questions will we use if we follow this method?
Each binary search uses $$$\left\lfloor{\log_2(\sqrt{S})}\right\rfloor=\left\lfloor{\log_2(S)/2}\right\rfloor$$$ questions, and after each binary search, the new value of $$$\log_2(S)$$$ is no greater than $$$1+\left\lfloor{\log_2(S)/2}\right\rfloor$$$. Of course, for many values of $$$x$$$ the exact answer is pinned down already during a binary search, but the analysis becomes harder if we account for this, so we assume $$$\log_2(S)$$$ always turns into $$$1+\left\lfloor{\log_2(S)/2}\right\rfloor$$$. Initially $$$0 \le x \le 10^{18}$$$, so let us set $$$\log_2(S)=60$$$. On the first round, $$$\left\lfloor{\log_2(S)/2}\right\rfloor=30$$$ questions are used, and $$$\log_2(S)$$$ changes to $$$31$$$. Repeating this, the number of questions until $$$S=2\left\lfloor{\sqrt{S}}\right\rfloor=4$$$ is $$$30+15+8+4+2+2=61$$$ in the worst case. If we linear search from this point, we can always solve the task with no more than $$$70$$$ questions. The constraint is relaxed to $$$75$$$ questions maximum, to allow solutions that start the linear search early.
Of course, most values of $$$x$$$ are not the worst case, and $$$10^{18}$$$ is somewhat smaller than $$$2^{60}$$$. Empirically, this solution hardly ever exceeds $$$50$$$ questions, but I did not prove this formally.
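For concreteness, here is a sketch of the whole strategy. The interaction protocol (print "? a b", receive whether $$$\sqrt{x-a} < b$$$, report with "! x") is an assumption; adapt it to the actual statement.

```cpp
#include <bits/stdc++.h>
using namespace std;
typedef long long ll;

// Assumed protocol: "? a b" asks whether sqrt(x - a) < b, judge answers 1 or 0.
bool ask(ll a, ll b) {
    cout << "? " << a << ' ' << b << endl;   // endl flushes, required for interactors
    int r;
    cin >> r;
    return r == 1;
}

int main() {
    ll lo = 0, hi = (ll)1e18;                // invariant: x is in [lo, hi]
    while (hi - lo > 4) {
        ll S = hi - lo, a = lo;              // shift by a so that x - a >= 0 always holds
        ll bh = (ll)sqrtl((long double)S) + 2;
        while (bh >= 2 && (bh - 1) * (bh - 1) > S) bh--;  // smallest b with b^2 > S
        ll bl = 0;                           // known: bl^2 <= x - a < bh^2
        while (bl + 1 < bh) {
            ll mid = (bl + bh) / 2;
            if (ask(a, mid)) bh = mid;       // sqrt(x-a) < mid  =>  x < a + mid^2
            else bl = mid;
        }
        lo = a + bl * bl;                    // now bl^2 <= x - a < (bl+1)^2
        hi = min(hi, a + bh * bh - 1);       // new size is at most 2*floor(sqrt(S))
    }
    while (lo < hi && !ask(lo, 1)) lo++;     // sqrt(x - lo) < 1 holds iff x == lo
    cout << "! " << lo << endl;
}
```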
H. Sequence and Not Difficult Queries
A lot of solutions exist. Not all of them have elegant implementations.
Think in terms of how many times the value changes. If we can count this, the count translates directly into the answer we want.
There are various solutions with time complexity $$$\mathcal{O}((n+q) \log{n})$$$. It is not hard to see that this task can be solved with GNU PBDS or many other methods. The issue is that the solution using GNU PBDS requires an excessive amount of implementation: managing intervals with a data structure is not as easy as one may think, and the constant factors are quite large as well. There is also a solution using a Segment Tree whose nodes store the tuple $$$(\text{left val},\text{right val},\text{count})$$$, but that general technique is quite overkill for this single task. Therefore, this editorial suggests a way to solve this task using the usual range-sum data structures.
Do not count the maximal runs of equal values directly; instead, view the task in terms of how many times the value changes inside the query range. It can be shown that the former is exactly $$$1$$$ greater than the latter. Now we only have to count, with a data structure, how many times the value changes, and as a change only happens between adjacent elements, we can simply maintain one indicator per adjacent pair.
Now, maintain a range-sum Segment Tree or Fenwick tree. Formally, for each pair of adjacent elements, store $$$0$$$ if they are equal and $$$1$$$ if they differ. A query of type $$$1$$$ then changes at most $$$2$$$ elements of the data structure, and a query of type $$$2$$$ simply becomes a range sum. As this is exactly the form the usual Segment Tree or Fenwick tree supports, the implementation is not hard.
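A sketch with a Fenwick tree; the exact query formats (type $$$1$$$: point assignment "1 i v", type $$$2$$$: count runs in "2 l r") are assumptions.

```cpp
#include <bits/stdc++.h>
using namespace std;

int n, q;
vector<long long> bit, a;

void update(int i, int delta) {            // point add on indicator i (1-indexed)
    for (; i <= n; i += i & -i) bit[i] += delta;
}
long long query(int i) {                   // prefix sum of indicators 1..i
    long long s = 0;
    for (; i > 0; i -= i & -i) s += bit[i];
    return s;
}
int d(int i) {                             // indicator for the pair (i, i+1)
    return i >= 1 && i < n && a[i] != a[i + 1];
}

int main() {
    cin >> n >> q;
    a.assign(n + 1, 0);
    bit.assign(n + 1, 0);
    for (int i = 1; i <= n; i++) cin >> a[i];
    for (int i = 1; i < n; i++) if (a[i] != a[i + 1]) update(i, 1);
    while (q--) {
        int t; cin >> t;
        if (t == 1) {
            int i; long long v; cin >> i >> v;
            int before[2] = {d(i - 1), d(i)};   // the two indicators touching position i
            a[i] = v;
            if (d(i - 1) != before[0]) update(i - 1, d(i - 1) - before[0]);
            if (d(i) != before[1]) update(i, d(i) - before[1]);
        } else {
            int l, r; cin >> l >> r;            // runs in [l, r] = changes inside + 1
            cout << query(r - 1) - query(l - 1) + 1 << '\n';
        }
    }
}
```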
Think of a necessary condition for the cactus to be foldable. You can just use that for a proof by AC... except for the time complexity, of course.
For a cactus graph, the following condition is both necessary and sufficient for the graph to be foldable into 1 dimension.
- For each cycle, if the sum of its edge lengths is $$$S$$$, a subset of its edges with length sum $$$S/2$$$ can be found.
For this task, we do not necessarily have to prove this condition; we can just use it for a "proof by AC". A proof will be given in the editorial of the hard version.
Now we must solve the Subset Sum problem for each cycle, and the time complexity is the issue. The basic dynamic programming solution takes $$$\mathcal{O}(m^2l)$$$ time, which is too slow. If some length appears on $$$k$$$ edges of a cycle, we can split those $$$k$$$ copies into $$$\mathcal{O}(\log k)$$$ items. There are many ways to do this formally, but taking away groups of $$$1,2,4,\dots,2^i$$$ while possible (and do not forget to include the leftover part of $$$k$$$!) is a popular choice. Now the number of items is $$$\mathcal{O}(l \log m)$$$, and the time complexity is $$$\mathcal{O}(ml^2 \log m)$$$. The solution can be optimized to $$$\mathcal{O}(\frac{ml^2\log m}{w})$$$ using bitsets. A dynamic bitset may be needed in this task: for C++, you may use std::tr2::dynamic_bitset<> from GCC, or bitset<len> instantiated with $$$len=2^k$$$ for several values of $$$k$$$; in other languages, you may use arbitrary-precision integers as dynamic bitsets.
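Here is a sketch of the subset-sum check for a single cycle, with the binary splitting above and a fixed-size bitset standing in for a dynamic one; the bound MAXS is an assumed, problem-specific constant.

```cpp
#include <bits/stdc++.h>
using namespace std;

const int MAXS = 50000000 / 2 + 1;   // assumed bound on the half-sum, e.g. m * max(l) / 2

bool half_sum_reachable(const vector<int>& lengths) {
    long long S = accumulate(lengths.begin(), lengths.end(), 0LL);
    if (S % 2) return false;
    if (S / 2 >= MAXS) return false; // outside the assumed bound; adjust MAXS if needed
    // Count multiplicities, then split k copies of length l into groups of
    // 1, 2, 4, ... plus the leftover: O(log k) items representing all counts 0..k.
    map<int, int> cnt;
    for (int l : lengths) cnt[l]++;
    vector<long long> items;
    for (auto [l, k] : cnt)
        for (int c = 1; k > 0; c *= 2) {
            int take = min(c, k);
            items.push_back(1LL * l * take);
            k -= take;
        }
    static bitset<MAXS> dp;          // dp[s] = can we reach sum s
    dp.reset();
    dp[0] = 1;
    for (long long it : items)
        if (it < MAXS) dp |= dp << it;   // items larger than the target are useless
    return dp[S / 2];
}
```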
J. Mixed Integer Quadratic Programming
Change the cost function into a piecewise linear function. Then, you can decompose each edge into multiple edges. The proof might interest you; I suggest you read it.
The new graph probably has some unfriendly features, such as multiple edges and negative cycles. Beware of these while implementing.
The constraints of this task are designed so that it can only be solved with an elaborate understanding of convex functions and of the original MCMF problem. The function $$$f(x)=ax^2+bx$$$ is convex because $$$a \ge 0$$$, and this is very important. Not only is the problem NP-hard (and thus practically unsolvable) when $$$a<0$$$ is allowed, but with convexity this task can even be solved using an MCMF implementation as a black box, if you exploit the traits of convex functions.
First, we transform $$$f(x)=ax^2+bx$$$ into a function $$$g(x)$$$ whose (nondifferentiable) vertices are all at integer points. Precisely, $$$g(x)=f(x)$$$ when $$$x \in \mathbb{Z}$$$, and otherwise $$$g(x)=f(\left\lfloor{x}\right\rfloor)+(f(\left\lceil{x}\right\rceil)-f(\left\lfloor{x}\right\rfloor))(x-\left\lfloor{x}\right\rfloor)$$$. In other words, the graph of $$$g(x)$$$ contains exactly the integer points of $$$f(x)$$$ and interpolates linearly between consecutive ones. This function is piecewise linear, and its slope on $$$[x,x+1]$$$ is $$$f(x+1)-f(x)=2ax+a+b$$$, which is weakly increasing. Therefore, the new function $$$g(x)$$$ is convex just like $$$f(x)$$$ is. For the task to be entirely solvable, we need one more observation.
The convexity of a function can be characterized not only by its derivative, but also by the fact that its epigraph (or hypograph, for concave functions) is a convex set. Also, for any convex sets $$$A$$$ and $$$B$$$, the Minkowski sum $$$A+B$$$ is well defined and is itself a convex set. Conversely, we can sometimes decompose a convex set into the Minkowski sum of two convex sets: given a convex set $$$P$$$, find sets $$$A$$$ and $$$B$$$ such that $$$P=A+B$$$. This is what we will do below.
Let us apply this observation to the new function $$$g(x)$$$. The convexity of $$$g(x)$$$ means that the epigraph of $$$g(x)$$$ is a convex set. The set of interest here is the intersection of the epigraph of $$$g(x)$$$ with the strip $$$[0,c] \times \mathbb{R}$$$; call it $$$C$$$. It is convex, being the intersection of two convex sets. Can this convex set be represented as the Minkowski sum of multiple simple convex sets? This is in fact possible. For each integer $$$x$$$ in the interval $$$[0,c)$$$, define a line segment $$$L_x$$$ connecting $$$(0,0)$$$ and $$$(1,f(x+1)-f(x))$$$, and define $$$S_x$$$ as the epigraph of the segment $$$L_x$$$. Then $$$S_0+S_1+S_2+\cdots+S_{c-1}$$$ is equal to the set $$$C$$$.
Now we return to the original MCMF task. An edge has cost $$$f(x)=ax^2+bx$$$ for $$$x$$$ units of flow, and its capacity is $$$c$$$ units. Since the amount of flow is an integer, the cost can equally be written as $$$g(x)$$$. When $$$a=0$$$, we leave the edge as is; when $$$a \neq 0$$$, we split the edge into $$$c$$$ parallel edges $$$E_0,E_1,E_2, \cdots, E_{c-1}$$$, where $$$E_x$$$ has cost $$$f(x+1)-f(x)$$$ and capacity $$$1$$$. If we send $$$x$$$ units of flow through these $$$c$$$ edges, the minimum possible cost is exactly $$$g(x)$$$. This can be proven by the observation above, or one may prove the greedy argument directly.
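A sketch of the edge transformation only; add_edge stands in for whatever MCMF implementation you use (it must tolerate parallel edges and negative cycles), so its name and signature are assumptions.

```cpp
#include <bits/stdc++.h>
using namespace std;

struct Edge { int u, v; long long cap, cost; };
vector<Edge> edges;

void add_edge(int u, int v, long long cap, long long cost) {
    edges.push_back({u, v, cap, cost});    // placeholder for the real MCMF add-edge
}

// Replace an edge u -> v with cost f(x) = a*x^2 + b*x and capacity c.
void add_quadratic_edge(int u, int v, long long a, long long b, long long c) {
    if (a == 0) {                          // already linear, keep as one edge
        add_edge(u, v, c, b);
        return;
    }
    for (long long x = 0; x < c; x++) {    // unit edge E_x costs f(x+1) - f(x)
        long long marginal = 2 * a * x + a + b;
        add_edge(u, v, 1, marginal);
    }
}

int main() {
    // Example: expand one edge 0 -> 1 with f(x) = 2x^2 + 3x, capacity 4.
    add_quadratic_edge(0, 1, 2, 3, 4);     // marginals 5, 9, 13, 17
    for (auto& e : edges)
        cout << e.u << " -> " << e.v << " cap " << e.cap << " cost " << e.cost << '\n';
}
```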
As the flow integrality theorem holds for MCMF, there is an optimal solution in which all flow values are integers, and such an optimal solution satisfies the conditions of this task. Therefore, if we run MCMF on the modified graph described above, the task is solved. However, you must check whether your MCMF implementation works on the modified graph: it must allow duplicate (parallel) edges and negative cycles to solve this task.
Bonus: Still, I did not prove that this problem is in P. For this problem to be in P, there must be an algorithm with time complexity polynomial in the input size, and since $$$c$$$ is given in binary, its input size is $$$\mathcal{O}(\log c)$$$. The solution above is polynomial in $$$c$$$, hence exponential in $$$\log c$$$, so it does not settle whether this problem is in P or not.
We gave you a condition in the editorial of the easy version. Prove it. Also, go read the blog (link); it contains the algorithm you probably should be using in this task.
In the editorial of the easy version, we gave this condition as a necessary and sufficient condition for the cactus to be foldable.
- For each cycle, if the sum of its edge lengths is $$$S$$$, a subset of its edges with length sum $$$S/2$$$ can be found.
We will now prove it for a solution to the hard version.
First, we will prove that the condition is sufficient. In each cycle, assign $$$+1$$$ to the edges of the subset $$$A$$$ with length sum $$$S/2$$$, and assign $$$-1$$$ to the rest. Then, run a DFS starting from an arbitrary vertex, giving each vertex a coordinate $$$x_v$$$: when we pass an edge of length $$$l$$$ with $$$+1$$$ assigned, set $$$x_u=x_v+l$$$, and otherwise set $$$x_u=x_v-l$$$. Then $$$|x_u-x_v|=l$$$ will hold for every edge, including back edges. This is because DFS goes around each cycle in a single direction, and the signed lengths around the cycle sum to $$$S/2-S/2=0$$$, so the back edge constraint is satisfied as well.
Proving that the condition is necessary is not too hard either. Take any valid assignment of coordinates, walk around one cycle, and put the edges that go in the negative direction into the set $$$A$$$. This set is exactly a solution to the partition problem.
Now the issue is that the $$$\mathcal{O}(\frac{ml^2\log m}{w})$$$ solution of the easy version does not scale to larger values of $$$l$$$. Luckily, there is an $$$\mathcal{O}(nc)$$$ algorithm for Subset Sum in the paper "Linear Time Algorithms for Knapsack Problems with Bounded Weights", and the solution itself can also be tracked with that algorithm. It uses a concept called "balancing", which is hard to explain here; I would rather suggest reading the blog linked in the hints. Using that algorithm, the time complexity drops to $$$\mathcal{O}(ml)$$$.
Now, once you have solved the Subset Sum problem for each cycle and tracked the solutions, adapt the sufficiency proof directly into a solution: assign $$$\pm 1$$$ to each edge, and run the DFS exactly as explained to obtain a valid assignment of $$$x_i$$$. If you start the DFS with $$$x=0$$$ at the root, the absolute values of $$$x_i$$$ never exceed $$$m \cdot \max(l)$$$, which is $$$10^5 \times 500 = 5 \times 10^7$$$. This is comfortably smaller than $$$10^9$$$.
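A sketch of this final step. The signed adjacency list is assumed to be already filled from the per-cycle subset-sum solutions (which is the actual hard part, omitted here), with the sign negated on the reverse direction of each edge.

```cpp
#include <bits/stdc++.h>
using namespace std;

// adj[v] holds tuples (neighbor, length, sign); sign is +1 or -1.
vector<long long> assign_coordinates(int n, const vector<vector<tuple<int, int, int>>>& adj) {
    vector<long long> x(n, 0);
    vector<bool> visited(n, false);
    stack<int> st;                  // iterative DFS avoids stack overflow on large cacti
    st.push(0);
    visited[0] = true;
    while (!st.empty()) {
        int v = st.top();
        st.pop();
        for (auto [u, len, sign] : adj[v])
            if (!visited[u]) {
                visited[u] = true;
                x[u] = x[v] + (long long)sign * len;  // guarantees |x[u] - x[v]| = len
                st.push(u);
            }
    }
    return x;                       // starting from x[0] = 0, |x[v]| <= m * max(l)
}

int main() {
    // Tiny demo: a triangle with lengths 1, 2, 3; the subset {1, 2} has half
    // the total sum, so those edges get +1 and the remaining edge gets -1.
    vector<vector<tuple<int, int, int>>> adj(3);
    auto add = [&](int u, int v, int len, int sign) {
        adj[u].push_back({v, len, sign});
        adj[v].push_back({u, len, -sign});
    };
    add(0, 1, 1, +1);
    add(1, 2, 2, +1);
    add(2, 0, 3, -1);
    for (long long c : assign_coordinates(3, adj)) cout << c << ' ';
    cout << '\n';                   // prints "0 1 3": all three |differences| match
}
```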