Thanks for participating!
If your solution to D involves any data structures and is not $$$O(n)$$$, please read "solution 1". I believe it is very interesting, but to make the difficulty suitable for D, we allowed non-$$$O(n)$$$ solutions.
1696A - NIT orz!
How many operations will we perform?
At most one. Why?
Suppose we perform exactly one operation. In this case the answer is $$$S=\max_{1\le i\le n}(a_i\mathrm{\ or\ }z)$$$. In fact, we can prove that $$$S$$$ is the answer in general.
Define $$$a_i'$$$ as the value of $$$a_i$$$ after some operations.
It suffices to prove the answer will never exceed $$$S$$$. Note that $$$z$$$ will always become a submask of itself after any number of operations, so $$$a_i$$$ will always be a submask of $$$(a_i\mathrm{\ or\ }z)$$$ after any number of operations. This leads to the conclusion that $$$a_i'\le (a_i\mathrm{\ or\ }z)$$$ for all $$$i$$$. Thus $$$\max_{1\le i\le n} a_i'\le \max_{1\le i\le n}(a_i\mathrm{\ or\ }z)=S$$$.
Time complexity is $$$O(n)$$$.
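For concreteness, here is a minimal sketch of this solution (my own code, not the author's; the input format of $$$t$$$ test cases, each with $$$n$$$ and $$$z$$$ followed by the array, is assumed from the statement):

```cpp
#include <bits/stdc++.h>
using namespace std;

int main() {
    int t;
    scanf("%d", &t);
    while (t--) {
        int n;
        long long z, ans = 0;
        scanf("%d %lld", &n, &z);
        for (int i = 0; i < n; i++) {
            long long x;
            scanf("%lld", &x);
            ans = max(ans, x | z); // (a_i or z) is the best value a_i can ever reach
        }
        printf("%lld\n", ans);
    }
    return 0;
}
```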
1696B - NIT Destroys the Universe
How many operations will we perform?
At most two. Why?
How to check if the array can be destroyed in $$$0$$$ or $$$1$$$ operations?
The answer is at most $$$2$$$, because doing the operation on $$$[1,n]$$$ at most twice will always work. (If the array contains at least one zero, we need $$$2$$$ operations. Otherwise we need $$$1$$$ operation.)
If the array consists of zeroes, the answer is $$$0$$$.
If all non-zero elements form a contiguous segment in the array, the answer is $$$1$$$. To check this, you can find the leftmost and rightmost occurrences of non-zero elements and check that all elements between them are also non-zero.
Otherwise the answer is $$$2$$$.
Time complexity is $$$O(n)$$$.
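A minimal sketch of this case analysis (my own code; the input format of $$$t$$$ test cases with $$$n$$$ and the array is assumed):

```cpp
#include <bits/stdc++.h>
using namespace std;

int main() {
    int t;
    scanf("%d", &t);
    while (t--) {
        int n;
        scanf("%d", &n);
        vector<long long> a(n);
        for (auto &x : a) scanf("%lld", &x);
        int l = 0, r = n - 1;
        while (l < n && a[l] == 0) l++;  // leftmost non-zero element
        while (r >= 0 && a[r] == 0) r--; // rightmost non-zero element
        int ans;
        if (l > r) {
            ans = 0; // the array is already all zeroes
        } else {
            ans = 1; // assume the non-zero elements are contiguous
            for (int i = l; i <= r; i++)
                if (a[i] == 0) { ans = 2; break; } // a zero inside: two operations
        }
        printf("%d\n", ans);
    }
    return 0;
}
```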
1696C - Fishingprince Plays With Array
The operation is reversible. (The two operations are reverses of each other.)
Try to find a middle state, such that we can turn both $$$a$$$ and $$$b$$$ into it.
Call the first operation "expand" and the second operation "shrink".
Keep applying expand to both arrays until no expand is possible; call the resulting arrays $$$a'$$$ and $$$b'$$$. It suffices to check whether $$$a'=b'$$$. To implement this, you need to compress contiguous equal numbers.
Proof of why this is necessary and sufficient:
Sufficiency is obvious, since the operations are reversible. We can do something like $$$a\to a'=b'\to b$$$.
Necessity: let $$$f(a)=a'$$$. It suffices to prove that an operation on $$$a$$$ does not change $$$f(a)$$$. An expand operation obviously does not change $$$f(a)$$$. A shrink operation merges $$$a[i,i+m-1]$$$ into one element; when computing $$$f$$$ of the resulting array, that element will be expanded again at some point, so the result is the same as $$$f(a)$$$.
Time complexity is $$$O((n+k)\log_m V)$$$, where $$$V=\max a_i$$$.
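A sketch of this comparison in compressed form (my own code, not the author's; I assume the input order is $$$n,m$$$, the array $$$a$$$, then $$$k$$$ and the array $$$b$$$, and that $$$m\ge 2$$$, so adjust the reading part to the actual statement). Multiplicities are kept as long long, because the fully expanded arrays can be astronomically long:

```cpp
#include <bits/stdc++.h>
using namespace std;
typedef long long ll;

// Run-length encoding of the fully expanded array: x with x = y * m^e
// (m does not divide y) expands into m^e copies of y.
vector<pair<ll, ll>> expandCompress(const vector<ll> &v, ll m) {
    vector<pair<ll, ll>> res; // (value, multiplicity), adjacent equal values merged
    for (ll x : v) {
        ll cnt = 1;
        while (x % m == 0) x /= m, cnt *= m; // requires m >= 2 to terminate
        if (!res.empty() && res.back().first == x) res.back().second += cnt;
        else res.push_back({x, cnt});
    }
    return res;
}

int main() {
    int t;
    scanf("%d", &t);
    while (t--) {
        int n, k;
        ll m;
        scanf("%d %lld", &n, &m);
        vector<ll> a(n);
        for (auto &x : a) scanf("%lld", &x);
        scanf("%d", &k);
        vector<ll> b(k);
        for (auto &x : b) scanf("%lld", &x);
        puts(expandCompress(a, m) == expandCompress(b, m) ? "Yes" : "No");
    }
    return 0;
}
```

Note that the multiplicities fit in a long long: each element expands into at most $$$V$$$ copies, so a merged run has multiplicity at most $$$n\cdot V$$$.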
1696D - Permutation Graph
This problem has two different solutions. The first one is more beautiful, but less straightforward.
The solution is $$$O(n)$$$. We don't need any data structures.
Instead of trying to construct the shortest path from $$$1$$$ to $$$n$$$, find a "transfer vertex" that we must pass through.
We will always pass through the position of the maximum element in the array.
Suppose the maximum element is $$$a_k$$$. Solve recursively for $$$dis(1,k)$$$ and $$$dis(k,n)$$$.
Denote $$$dis(x,y)$$$ as the length of the shortest path between $$$x$$$ and $$$y$$$.
Consider the position $$$i$$$ such that $$$a_i=n$$$. Assume $$$i\ne 1$$$ and $$$i\ne n$$$. For any segment that contains $$$i$$$, its maximum element is $$$a_i$$$. Thus, for $$$x<i<y$$$, $$$x$$$ and $$$y$$$ will never be directly connected by an edge. This means that when going from $$$1$$$ to $$$n$$$, we have to pass through $$$i$$$. Let us solve recursively for $$$dis(1,i)$$$ and $$$dis(i,n)$$$; for example, consider $$$dis(1,i)$$$.
We already know that $$$a_i=n$$$, so $$$a_i$$$ is the maximum element of $$$[1,i]$$$. Consider the minimum element of $$$[1,i]$$$; suppose it is $$$a_j\ (j<i)$$$. By similar arguments, we can solve recursively for $$$dis(1,j)$$$ and $$$dis(j,i)$$$. However, note that $$$dis(j,i)$$$ equals $$$1$$$: since $$$a_j$$$ and $$$a_i$$$ are respectively the minimum and maximum of $$$[1,i]$$$, they are also the minimum and maximum of $$$[j,i]$$$, so $$$j$$$ and $$$i$$$ must be directly connected. Thus, we only need to solve recursively for $$$dis(1,j)$$$.
The process for $$$dis(i,n)$$$ is similar. Note that we will only recurse on $$$dis(l,r)$$$ with $$$l=1$$$ or $$$r=n$$$ (any other call, like $$$dis(j,i)$$$ above, immediately returns $$$1$$$), so it suffices to pre-compute prefix and suffix minimums and maximums.
The time complexity is $$$O(n)$$$.
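Here is a hedged sketch of this $$$O(n)$$$ solution (my own re-implementation of the walk described above, not the author's code). pmin/pmax hold positions of prefix minima/maxima and smin/smax the suffix ones; starting from the position of the value $$$n$$$, we walk towards both ends, alternating between argmin and argmax jumps and counting one edge per jump:

```cpp
#include <bits/stdc++.h>
using namespace std;

int main() {
    int t;
    scanf("%d", &t);
    while (t--) {
        int n;
        scanf("%d", &n);
        vector<int> a(n + 1), pmin(n + 1), pmax(n + 1), smin(n + 2), smax(n + 2);
        for (int i = 1; i <= n; i++) scanf("%d", &a[i]);
        pmin[1] = pmax[1] = 1;
        for (int i = 2; i <= n; i++) {
            pmin[i] = a[i] < a[pmin[i - 1]] ? i : pmin[i - 1];
            pmax[i] = a[i] > a[pmax[i - 1]] ? i : pmax[i - 1];
        }
        smin[n] = smax[n] = n;
        for (int i = n - 1; i >= 1; i--) {
            smin[i] = a[i] < a[smin[i + 1]] ? i : smin[i + 1];
            smax[i] = a[i] > a[smax[i + 1]] ? i : smax[i + 1];
        }
        int k = pmax[n]; // position of the value n, i.e. the global maximum
        long long ans = 0;
        // dis(1, k): walk left, alternating prefix-argmin / prefix-argmax jumps
        int i = k;
        bool needMin = true; // a[k] is the prefix maximum, so jump to the prefix minimum next
        while (i > 1) { i = needMin ? pmin[i] : pmax[i]; needMin = !needMin; ans++; }
        // dis(k, n): symmetric walk to the right with suffix argmin / argmax
        i = k;
        needMin = true;
        while (i < n) { i = needMin ? smin[i] : smax[i]; needMin = !needMin; ans++; }
        printf("%lld\n", ans);
    }
    return 0;
}
```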
Look at the last sample test case. Think of a simple greedy.
Repeatedly going to the rightmost reachable vertex (the directly connected vertex with the largest index) works.
Use data structures to simulate the process. How?
We can prove that repeatedly going to the reachable vertex with the largest index is a correct strategy. The proof is left as an exercise :) Hint: try to prove that the path we visit is the same as the path visited in solution 1.
Suppose we are at $$$i$$$. We want to find the largest $$$j>i$$$ such that $$$i$$$ and $$$j$$$ are directly connected. WLOG, assume $$$a_{i+1}<a_i$$$. Then it cannot be the case that $$$a_j>a_i$$$, since then neither $$$a_i$$$ nor $$$a_j$$$ would be $$$mn(i,j)$$$. Thus $$$a_j<a_i$$$. It follows that every $$$i<k<j$$$ satisfies $$$a_k<a_i$$$; otherwise neither $$$a_i$$$ nor $$$a_j$$$ would be $$$mx(i,j)$$$.
Let $$$r_i$$$ be the largest $$$p$$$ such that for all $$$t\in [i+1,p]$$$, $$$a_t<a_i$$$. From the arguments above we know $$$j\in [i+1,r_i]$$$. $$$r_i$$$ can be pre-computed with a stack, or with binary search plus some data structures.
Let $$$j_0$$$ be the position of the minimum element in $$$[i+1,r_i]$$$. Obviously $$$j_0$$$ is directly connected with $$$i$$$. For any $$$j_0<k\le r_i$$$, $$$mn(i,k)$$$ will be $$$a_{j_0}$$$, so no such $$$k$$$ is directly connected with $$$i$$$. Thus, $$$j_0$$$ is the desired $$$j$$$.
If we use data structures for range minimum, we get an $$$O(n\log n)$$$ solution, which can easily pass (I am not sure whether $$$O(n\log^2 n)$$$ solutions will pass though; the large constraints were intended to cut those off).
However, by building the Cartesian tree of the array and doing proper pre-computations, we can optimize this solution to $$$O(n)$$$.
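For comparison, a sketch of the data-structure version of solution 2 in $$$O(n\log n)$$$ (my own code; it uses sparse tables of positions plus binary search for $$$r_i$$$, not the Cartesian-tree optimization):

```cpp
#include <bits/stdc++.h>
using namespace std;

int main() {
    int tc;
    scanf("%d", &tc);
    while (tc--) {
        int n;
        scanf("%d", &n);
        vector<int> a(n + 1);
        for (int i = 1; i <= n; i++) scanf("%d", &a[i]);
        int LOG = __lg(max(n, 2)) + 1;
        // mn[k][i] / mx[k][i] = position of the min/max of a[i .. i + 2^k - 1]
        vector<vector<int>> mn(LOG, vector<int>(n + 1)), mx(LOG, vector<int>(n + 1));
        for (int i = 1; i <= n; i++) mn[0][i] = mx[0][i] = i;
        for (int k = 1; k < LOG; k++)
            for (int i = 1; i + (1 << k) - 1 <= n; i++) {
                int l = mn[k - 1][i], r = mn[k - 1][i + (1 << (k - 1))];
                mn[k][i] = a[l] < a[r] ? l : r;
                l = mx[k - 1][i], r = mx[k - 1][i + (1 << (k - 1))];
                mx[k][i] = a[l] > a[r] ? l : r;
            }
        // argmin (wantMin = true) or argmax position on [l, r]
        auto qry = [&](int l, int r, bool wantMin) {
            int k = __lg(r - l + 1);
            auto &tab = wantMin ? mn : mx;
            int p = tab[k][l], q = tab[k][r - (1 << k) + 1];
            return wantMin ? (a[p] < a[q] ? p : q) : (a[p] > a[q] ? p : q);
        };
        long long steps = 0;
        for (int i = 1; i < n; steps++) {
            bool down = a[i + 1] < a[i]; // the WLOG case from the editorial
            // binary search for r_i: largest r with all of a[i+1..r] below (above) a[i]
            int lo = i + 1, hi = n, r = i + 1;
            while (lo <= hi) {
                int mid = (lo + hi) / 2;
                int ext = qry(i + 1, mid, !down); // max if going down, min if going up
                if (down ? a[ext] < a[i] : a[ext] > a[i]) { r = mid; lo = mid + 1; }
                else hi = mid - 1;
            }
            i = qry(i + 1, r, down); // jump to j_0: argmin (down) or argmax (up)
        }
        printf("%lld\n", steps);
    }
    return 0;
}
```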
1696E - Placing Jinas
Try to find out the number of operations we do on a specific cell $$$(i,j)$$$; call it $$$f(i,j)$$$.
Write the recurrence formula for $$$f(i,j)$$$. What is $$$f(i,j)$$$?
$$$f(i,j)=\binom{i+j}j$$$
The answer is the sum of $$$f(i,j)$$$ over all white cells $$$(i,j)$$$. Use some combinatorics formula to speed it up.
Let us find out the number of operations we do on a specific cell $$$(i,j)$$$, call it $$$f(i,j)$$$.
Every operation done on $$$(i-1,j)$$$ will lead to one doll on $$$(i,j)$$$, thus consuming one operation on $$$(i,j)$$$. Similar observation holds for $$$(i,j-1)$$$.
Thus, $$$f(i,j)=f(i,j-1)+f(i-1,j)$$$ (given that $$$(i,j),(i-1,j),(i,j-1)$$$ are all white cells). Note that $$$a$$$ is non-increasing: this means that if $$$(i,j)$$$ is white, $$$(i-1,j),(i,j-1)$$$ will both be white. So we can conclude that $$$f(i,j)=f(i,j-1)+f(i-1,j)$$$ always holds as long as $$$(i,j)$$$ is white.
Another way to see the formula: $$$f(i,j)$$$ is the number of ways to go from $$$(0,0)$$$ to $$$(i,j)$$$, moving only down or right by one step at a time. This implies that $$$f(i,j)=\binom{i+j}j$$$.
From this, we know that the answer is $$$\sum_{i=0}^n\sum_{j=0}^{a_i-1} \binom{i+j}{i}$$$. With the equation $$$\sum_{i=0}^k\binom{n+i}n=\binom{n+k+1}{n+1}$$$, we know that the answer is $$$\sum_{i=0}^n\binom{i+a_i}{i+1}$$$.
The time complexity is $$$O(n+V)$$$, where $$$V=\max a_i$$$.
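A sketch of the resulting formula (my own code; the modulus is assumed to be $$$10^9+7$$$, and the input is assumed to be $$$n$$$ followed by the $$$n+1$$$ values $$$a_0,\dots,a_n$$$, so double-check the statement):

```cpp
#include <bits/stdc++.h>
using namespace std;
typedef long long ll;

const ll MOD = 1000000007; // assumed modulus; check the actual statement

ll pw(ll b, ll e) { // fast exponentiation, used for the modular inverse
    ll r = 1;
    for (b %= MOD; e; e >>= 1, b = b * b % MOD)
        if (e & 1) r = r * b % MOD;
    return r;
}

int main() {
    int n;
    scanf("%d", &n);
    vector<ll> a(n + 1);
    ll mxA = 0;
    for (int i = 0; i <= n; i++) { scanf("%lld", &a[i]); mxA = max(mxA, a[i]); }
    int N = n + (int)mxA + 2; // factorials up to max(i + a_i)
    vector<ll> fact(N + 1), inv(N + 1);
    fact[0] = 1;
    for (int i = 1; i <= N; i++) fact[i] = fact[i - 1] * i % MOD;
    inv[N] = pw(fact[N], MOD - 2);
    for (int i = N; i >= 1; i--) inv[i - 1] = inv[i] * i % MOD;
    auto C = [&](ll x, ll y) -> ll {
        if (y < 0 || y > x) return 0;
        return fact[x] * inv[y] % MOD * inv[x - y] % MOD;
    };
    ll ans = 0;
    for (int i = 0; i <= n; i++)
        ans = (ans + C(i + a[i], i + 1)) % MOD; // row i contributes C(i + a_i, i + 1)
    printf("%lld\n", ans);
    return 0;
}
```

Note that a row with $$$a_i=0$$$ correctly contributes $$$\binom{i}{i+1}=0$$$.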
The solution does not involve painful casework or deadly implementation.
Suppose we already know that edge $$$(i,j)$$$ exists in the tree. What can we deduce from it?
We can immediately recover the whole tree.
Read the hints first to understand the solution better.
Construct a graph with $$$\binom n2$$$ vertices $$$(1,2),(1,3),\dots,(n-1,n)$$$.
If $$$dis(a,b)=dis(b,c)$$$, link an undirected edge between $$$(a,b)$$$ and $$$(b,c)$$$.
Observe that all edges in the tree form a connected component of size exactly $$$n-1$$$ in the graph!
For each component of size $$$n-1$$$, check whether its vertices form a tree that satisfies the input. There are at most $$$\dfrac n2$$$ such components, so the complexity is $$$O(n^4)$$$. Proper pre-computation and the use of bitsets can reduce this to $$$O(n^4/w)$$$.
1696G - Fishingprince Plays With Array Again
What kind of problem is this?
Linear programming.
Consider the dual.
Consider the case $$$n=2$$$. Draw the linear program on the $$$xOy$$$-plane. Try to observe what the answer might be.
First we solve the problem with only 1 query on the whole array $$$A$$$.
This is a linear programming problem:
Consider its dual:
Suppose $$$X\le Y$$$. Now we will prove that there exists an optimal solution to the dual problem in which every $$$x_i$$$ takes one of only three values: $$$1/Y$$$, $$$1/(X+Y)$$$, $$$0$$$.
The proof is as follows. It is well known that an optimal solution to a linear program must lie on a vertex of the "multi-dimensional convex polygon" that the restrictions surround. Thus we are only interested in $$$x$$$ that satisfy several "=" restrictions (and those restrictions should really intersect at one point, meaning that the "="s uniquely determine $$$x$$$). Consider any "sufficient" subset of them (one that uniquely determines $$$x$$$). If a restriction relates $$$x_p$$$ and $$$x_q$$$, link an undirected edge between $$$p$$$ and $$$q$$$; if a restriction involves only $$$x_p$$$ (i.e. $$$x_p=0$$$), link a self-loop on $$$p$$$. Being sufficient means that every connected component of this graph has exactly one cycle. However, for an edge $$$(u,v)$$$ we know that either $$$u=v+1$$$ or $$$u=v$$$, so every cycle is either $$$(i\to i+1\to i)$$$ or $$$i\to i$$$. If a cycle is $$$(i\to i+1\to i)$$$, all $$$x_i$$$ in its component equal $$$1/(X+Y)$$$; if a cycle is $$$i\to i$$$, all $$$x_i$$$ in its component equal $$$1/Y$$$ or $$$0$$$ (not $$$1/X$$$, because that would exceed the constraints).
Thus we can use DP to solve the dual problem. Let $$$dp(i,0/1/2)$$$ be the maximum of $$$\sum_{j\le i}A_jx_j$$$ when $$$x_i$$$ is the $$$0/1/2$$$-th candidate value above. Transitions are straightforward.
For multiple queries, the transitions can be written as matrix multiplications, and we can use a segment tree to maintain updates.
About precision issues: we can actually avoid floating-point numbers completely. Note that all numbers in this problem are fractions with denominator $$$Y(X+Y)$$$. Also note that the answer does not exceed $$$(\sum a_i)/Y$$$. This means the numerator does not exceed $$$(\sum a_i)\times (X+Y)<10^{18}$$$, so we can use long longs to store only the numerators. If you use double in C++, the relative error of one operation is less than $$$10^{-15}$$$, and $$$10^{-15}\times n<10^{-9}$$$, which means using doubles is also enough.
Complexity: $$$O(n+q\log n)$$$.
Find a way to calculate the maximum product that can be turned into counting.
Use monotonicity to reduce the complexity.
First, we describe a strategy to find the answer for a single subset. If the whole subset is negative, the answer is the product of the $$$K$$$ maximum numbers in it. Otherwise, take the $$$K$$$ numbers with the maximum absolute value. If there is an even number of negative numbers among them, we're done. Otherwise, either find the smallest chosen positive element and replace it with the unchosen negative element of maximum absolute value, or find the largest chosen negative element and replace it with the maximum unchosen positive element. We do either the first change or the second change.
This gives a direct DP solution. Take all $$$a_i$$$ and sort them into two arrays consisting of the positive and negative ones ($$$0$$$ can go to either), $$$pos$$$ and $$$neg$$$, sorted by absolute value. By calculating $$$fpos(i,k)$$$, the sum of the product of the largest $$$k$$$ numbers over all subsets of $$$pos[1\dots i]$$$ that contain $$$pos_i$$$, and $$$fneg(i,k)$$$ similarly, and $$$gneg(i,k)$$$, the sum of the product of the $$$k$$$ numbers smallest in absolute value over all subsets of $$$neg[i\dots |neg|]$$$ that contain $$$neg_i$$$, and enumerating the following, we can solve the original problem in $$$O(n^5)$$$:
- the number of positive elements in the $$$K$$$ numbers with the maximum absolute value in our calculating subset, $$$p$$$.
- the smallest positive element in the $$$K$$$ numbers with the maximum absolute value, $$$pos_i$$$.
- the greatest negative element in the $$$K$$$ numbers with the maximum absolute value, $$$neg_j$$$.
- (if $$$p$$$ is odd) the greatest positive element not in the $$$K$$$ numbers with the maximum absolute value, $$$pos_k$$$.
- (if $$$p$$$ is odd) the smallest negative element not in the $$$K$$$ numbers with the maximum absolute value, $$$neg_l$$$.
The contribution to the answer can be represented as a sum of products of $$$fpos$$$, $$$fneg$$$, and powers of two. The details are left as an exercise.
However, notice that the "enumerating $$$k,l$$$" part has nothing to do with $$$p$$$, so we can pre-calculate the contribution of $$$k,l$$$ for every pair $$$(i,j)$$$, giving an $$$O(n^4)$$$ algorithm.
What's more, for fixed $$$i,j,k$$$, the set of $$$l$$$ for which we do the first change is a prefix/suffix of all $$$l$$$, and the set of $$$l$$$ for which we do the second change is a prefix/suffix of all $$$l$$$. So with two pointers and prefix sums, we can pre-calculate the contribution of every $$$(i,j)$$$ in $$$O(n^3)$$$, which is fast enough.
You might be afraid that $$$600^3$$$ is too slow for 1.5 seconds. However, the two $$$O(n^3)$$$ parts of the algorithm actually run in $$$O(n\times cntpos\times cntneg)$$$ ($$$cntpos,cntneg$$$ are the numbers of positive/negative integers in the array), leading to a constant factor of at most $$$1/4$$$. Thus the algorithm actually runs very fast (less than 0.25 seconds). For similar constant-factor reasons, however, the $$$O(n^4)$$$ solution takes only about 6.5 seconds on Codeforces, so we had to set a seemingly tight limit.
Thank you for fast editorial!
fast editorial, good problems
great round!
What a speed!
Editorials with hints. Appreciable effort. Thanks!
I hope I'm not the only one who got WA on question B for not realizing that the answer is at most 2... XD
Got WA 4 times, then realised.
Lol Same...
The best way to present an editorial is with hints. Thanks for the fast and crisp editorial!
Thank you for the fast editorial feecIe6418. Congrats on your first Global Round. It's hard to imagine that you single-handedly created all but one of the problems for this round; that's amazing! Great work!
I really liked the problems. Personally I felt that the problem statements were not very lengthy (which is good) and not too hard to understand for low-rated players.
Honestly, I only checked out the first three problems, but I found them really interesting. I was able to work out the solutions for the first two, sample test cases were also good in my opinion, no problems with that.
Unfortunately, due to a power cut in my locality, I didn't have internet access for the first hour of the contest and thus decided not to participate. Looking forward to upsolving in virtual participation.
Also, many congratulations once again on your first Global Round. It's hard to imagine the amount of work that goes into making these problem statements and testing them. Kudos to the team and to all testers and problem setters on the platform!!
Thanks for the fast editorial. By the way, E < D.
Feeling sorry for Tourist. Dude is facing a lot of negative deltas recently. I remember reading a blog a few days back where someone predicted Tourist's fall. It's becoming real now :(
Just out of curiosity, could you link that blogpost? I'd like to read it too
Brother, legends never die. Tourist is legendary; no one takes his place.
There is nothing called legendary. If you consistently put in the effort for over a decade (he has been into mathematics and programming since age 10), you could become like him (maybe even better than him).
And you're still a newbie after 3 years.
A lion takes a step back before making its bigger attack.
Normal competition when everyone is too good.
Share the blog. I want to see what happened to our king!!
I don't think it's a downfall if you get -56 for global rank 7.
Thank you for Editrorial!
Solution for E without using the identity given in the editorial: Let $$$D_i$$$ be the $$$i$$$'th diagonal, the set of $$$(x, y)$$$ such that $$$x+y=i$$$. Calculate the answer by summing each diagonal. When moving from $$$D_i$$$ to $$$D_{i+1}$$$, the current number of moves is simply multiplied by $$$2$$$, except in the case where some cell in $$$D_i$$$ is a "boundary" (the cell down or to the right is black). But there are only $$$O(n)$$$ such boundary cells, so we can simply subtract their contribution as we go along.
Plz show codes for the questions
Are you sure the memory limit will not be exceeded if we expand them? If not, why did it happen to me?
You shouldn't build the whole array; you can just store the occurrences.
can you pls explain how to store the occurrences such that they appear in the correct order as well?
I did it by just storing pairs of values: the factor and its number of occurrences. For example, for M = 3 and A = {6 3 4} -> {[2,3] [1,3] [4,1]}. Did it for both vectors and compared them.
161770389 You can check here. I've just stored it as pairs: if A={4,2,6} and M=2, then it will be {{1,6},{3,2}}.
This is how I rewrote my solution after hitting the memory limit.
https://codeforces.me/contest/1696/submission/161775684
Problem F is great!
Firstly, notice that if you know a single edge of the tree, you can figure out all the edges. (If you know an edge u,v: first find all w such that dist[u][v] = dist[u][w] and add edge u,w; then continue this process.) In the end, you can verify the existence in O(n^3). Now, realize that out of (1,2), (1,3) ... (1,n), one of the edges must be present. So, try all of these n-1 possible edges and check if you get a valid tree.
D was really great; I solved it with a stack, a segment tree, and BFS :).
Was it really a good idea to give n+1 numbers in the array for task E? I failed to solve it only because of this. Tasks shouldn't contain such a potential trap, I think.
lmao came here to comment this
D is really cool; I solved it with a monotonic stack and binary search.
I can't understand the second approach given in the editorial. Need help.
could someone please find the mistake in my submission for C?
Int overflow
thanks for the editrorial!
Please give some proofs for the math formulas you are using in the E solution; otherwise it isn't a tutorial, because your solution is based only on this math fact, so you have to prove it.
It's just the hockey stick identity. Nothing more, nothing less. I don't think one has to prove a well-known, simple combinatorial identity in their editorial.
I'll leave a link to the solution of last problem from last Starters since it also uses this identity and gives a 'proof' based on another (maybe more well-known) identity.
For row $$$i$$$: the sum is $$$\binom{i}{i} + \binom{i+1}{i} + \binom{i+2}{i} + \dots + \binom{i+v_i-1}{i}$$$.
Now, since $$$\binom{i}{i} = 1 = \binom{i+1}{i+1}$$$,
replacing this in the formula above, we get the sum for row $$$i$$$ as:
$$$\binom{i+1}{i+1} + \binom{i+1}{i} + \binom{i+2}{i} + \dots + \binom{i+v_i-1}{i}$$$.
Combine the first two terms: $$$\binom{i+1}{i+1} + \binom{i+1}{i} = \binom{i+2}{i+1}$$$ (using the simple property $$$\binom{n}{r} = \binom{n-1}{r-1} + \binom{n-1}{r}$$$).
Replacing this in the equation, we get:
$$$\binom{i+2}{i+1} + \binom{i+2}{i} + \dots + \binom{i+v_i-1}{i}$$$.
Again we can combine the next two terms, and so on. Finally we get $$$\binom{i+v_i}{i+1}$$$.
Thinking about how to come up with a visual approach, I found the following one. It does nothing different from what 123thunderbuddie did, but it supports the idea visually.
We want to find the sum of the number of ways to reach every cell in the $$$i^{th}$$$ row. Assuming there are $$$c$$$ cells, the sum is equal to the following (just what 123thunderbuddie wrote):
Suppose there is a row below with the same number of cells $$$c$$$ (not the $$$(i+1)^{th}$$$ row but a temporary one). This sum is actually equal to the number of ways to reach the last cell in that row, which is $$$\binom{i+c}{i+1}$$$ (the number of ways to reach cell $$$(i+1, c-1)$$$ from $$$(0, 0)$$$). Here is a visualization:
Why is this submission with long long giving WA, but this one isn't?
The variable z_cnt is not initialized in either of them, resulting in UB. You got lucky with the second solution.
Thanks :)
Kudos to the authors for an amazing round. Finally reached Purple!
I've noticed that in G there are no small tests before the first big one; maybe that's the case in other problems too. Keep in mind that this is bad.
I am not sure of the time complexity of this solution for D; can someone help out? https://codeforces.me/contest/1696/submission/161799451
I feel it should be $$$O(n)$$$ because any next greater or next smaller chain is travelled once.
We can do D in $$$O(n)$$$ by maintaining premax, premin, sufmax, sufmin.
thanks for the code
Won't this code add multiple edges between two nodes? Though it won't affect the result.
I solved problem D using a greedy approach :) Here it is: 161785869
Can someone explain C please
Just maintain a vector of pairs where pair.first = the actual value and pair.second = its frequency. Build such a vector for both a and b, and check at the end whether a == b. Since every number can be divided by m at most logarithmically many times (at most ~30 times per element even in the worst case), this is still O(n). Code:
Another solution for F: Let $$$S(u)$$$ be the set of pairs $$$(x, y)$$$ such that $$$d(x, u) = d(y, u)$$$. Notice that the $$$u$$$ with the minimal number of elements in $$$S(u)$$$ is a leaf, and the vertex connected to it is the vertex $$$v$$$ such that $$$S(v)$$$ contains $$$S(u)$$$ and every pair $$$(x, y)$$$ that belongs to $$$S(v)$$$ but not to $$$S(u)$$$ has $$$x = u$$$ or $$$y = u$$$. We can remove that leaf and continue the process.
Do you know how to prove this? It does make sense intuitively.
Is it just me or are the codes not attached?
Hope I'm not the only one who overcomplicated D with sparse tables, sets and binary searches.
I was dazzled when I found the "more beautiful but less straightforward" solution to problem D. It was perfectly splendid! orz
Anyone else who got wrong answer on problem B until realising the answer can be at most 2?!
"EDITRORIAL" of Codeforces Global Round 21. Lol :D
I have another solution for F. It was very painful to implement, though, so I couldn't finish it during the contest. Maybe some of you had a similar thought.
First of all, if x and y are an even distance apart, then there must be a valid z that is equidistant from them (the center of their path). Also, if x and y are an odd distance apart, then clearly no valid z exists. Therefore we can create a graph with an edge between x and y whenever such a valid z exists. This graph must be bipartite, so we can split the vertices into 2 categories.
Now we will solve for the adjacent vertices of every vertex. After this, it will be pretty easy to construct a solution (and check if it is false), as we can just add in all the apparent edges. Consider a vertex v. For all other vertices, we can form equivalence groups by their distance to v. Then, for each of these groups, we can check if every 2 vertices of that group are equidistant. If so, then clearly every one of these vertices is in a different component of the "graph without v". Now the size of the largest of those groups must be the degree of v, as each outgoing edge of v signifies one connected component of the "graph without v".

Of course, we are still not able to pinpoint which one of these groups contains the vertices at distance 1 around v. Let's say there are m such groups (groups where every 2 vertices are equidistant from each other and every vertex is equidistant from v, of the largest possible size). Then they must be "circles" around v from distance 1 to m, and we are trying to find the group at distance 1.

This is where the bipartite property comes in: consider any point p from a group a distance k away from v. If p and v are in different bipartite categories, then they are an odd distance away from each other. Otherwise, they are an even distance away, and there will be exactly one vertex z that is equidistant from them both, in the middle of the path between p and v. This z will clearly be in the equivalence group at distance k/2 from v; let's call this operation (obtaining a new vertex at distance k/2) splitting. Now consider the equivalence group that splits the most. It is clear that if $$$k=2^b a$$$ ($$$a$$$ odd), then b is the number of times it splits. Then, it is obvious that the largest power of 2 less than or equal to k will satisfy this property. The important part is that a vertex at a power-of-2 distance will eventually split into a vertex at distance 1 from v, so we can obtain all the other vertices at distance 1 around v, and we are done.
I believe this method as described takes $$$O(n^3\log{n})$$$ time, and it could be simplified to $$$O(n^3)$$$.
I came up with this solution too, and it unfortunately is incorrect (or perhaps incomplete).
This statement is false (I had based my solution around this assumption too).
Consider the following tree (I have highlighted the maximum size levels of vertices which have all their pairwise lca's at root (which you call "circles") with red):
So we can easily construct trees in which circles are not at one contiguous range of distances, but rather present at disjoint ranges of distances (and this totally breaks our ability to find the circle at distance 1 by halving the number of valid levels at each step).
Implementation of this approach: link
Here is my solution to problem D:
I have a recursive function that takes two parameters $$$(l, r)$$$, the positions of the starting and ending indices in the array, and calculates the optimal answer for this range.
For the optimal answer, I first find the index of the minimum element and maximum element in the range $$$(l, r)$$$ and store them in $$$s$$$, $$$e$$$. $$$(s < e)$$$. I am using a segment tree and a map to find the index of the minimum and maximum in the range $$$(l, r)$$$. It is optimal to have an edge between $$$(s, e)$$$, so we add one edge.
So, the answer for the range $$$(l, r)$$$ is $$$1$$$ + the answer for $$$(l, s)$$$ + the answer for $$$(e, r)$$$, and the base case is that if $$$l = r$$$, there is no need to add an edge, so return $$$0$$$.
https://codeforces.me/contest/1696/submission/161807339
Here is another solution for F:
Assume that for some two nodes $$$u$$$ and $$$v$$$, we know there exists an edge between them in the solution.
From this, we can find all vertices $$$w$$$ such that $$$dist(u, v) = dist(v, w)$$$, then the solution must also contain the edge $$$(v, w)$$$.
Repeating that process, we can find the solution (if it exists) in a BFS-like way.
Therefore, for each $$$x$$$ from $$$2$$$ to $$$n$$$ we can suppose that the edge $$$(1, x)$$$ exists in the solution, try to find that solution, and see if it is valid.
Great round. I especially liked that nothing between A and E is too implementation-heavy.
Damn my $$$O(n \log^2n)$$$ solution for D managed to pass. :)
Minor issue: blog title has a typo (Editorial not editrorial)
feecIe6418
There are quite a lot of typos.
Here is an $$$O(n^3)$$$ solution to F. First we can find the farthest distance from each vertex $$$x$$$ to any other vertex, so we know the diameter of the tree. If we root the tree at the center, we know the depth of every vertex. So we can determine at least one edge of the tree (a node with depth 1 together with the root), then use the method in the editorial. This also shows there is at most one valid answer.
That's quick.
I think the contest relies too much on observation.
As is literally every codeforces round including div 3, div.2, educational, div1+div2 combined, div1, olympiad-based rounds?
(yes I know that div.4 exists but I don't have a very fond experience in div.4)
A bit of a pity for D:
passing pretests made me think my solution worked,
but I didn't come up with cases like 1 100000 2 99999 3 99998 4 99997 ...
so I got FST.
I should try to consider more cases next time, or maybe always try to achieve as low a complexity as I can.
It's actually a great problem though, and solution 1 is awesome.
why so many downvotes
For D, what if the maximum element is 1 or n?
Could someone please help me find the error in my code for problem D? I am using approach 2. https://codeforces.me/contest/1696/submission/161834104
Take a look at Ticket 14387 from CF Stress for a counter example.
Ohh yes Got it Thanks!!
In problem G, when I set
It got TLE on test 76 (or Pretest Passed 5880ms, and TLE on test 7 if I unroll the loops in matrix multiplication myself) with GNU C++14 (32bit), but ~850ms with GNU C++20 (64bit, winlib).
However, when I set
Its speed remains unchanged with GNU C++20 (64bit, winlib), but when compiled by GNU C++14 (32bit), it works in <500ms and becomes the fastest submission.
Is there any logic behind this (about inf in double)?

An explanation for the first (but not very informative) is that dividing by zero (even with floating-point numbers) is undefined behavior (according to [expr.mul.4], or discussions on StackOverflow), and anything can happen with undefined behavior.
A better suggestion might be using std::numeric_limits<double>::infinity() as the value for inf. This is not undefined behaviour and still compares bigger than the maximum value representable with double.
.If you are/were getting a WA/RE verdict on problems from this contest, you can get the smallest possible counter example for your submission on cfstress.com. To do that, click on the relevant problem's link below, add your submission ID, and edit the table (or edit compressed parameters) to increase/decrease the constraints.
If you are not able to find a counter example even after changing the parameters, reply to this thread (only within the next 7 days) with links to your submission and ticket(s).
Can anyone please explain the proof for problem C intuitively, using an example? I couldn't understand the editorial.
Try expanding $$$a$$$ and $$$b$$$ into $$$a^\prime$$$ and $$$b^\prime$$$ in the example test cases.
I have expanded both arrays to the maximum extent possible, but I'm still not getting the proof/intuition.
Now notice that it is possible to obtain $$$a^\prime$$$ from $$$a$$$ after some operations, and it is possible to obtain $$$b$$$ from $$$b^\prime$$$ after some operations, so if $$$a^\prime = b^\prime$$$, we can obtain $$$b$$$ from $$$a$$$.
We still need to prove that if $$$a^\prime \ne b^\prime$$$, it is impossible to obtain $$$b$$$ from $$$a$$$ after any number of operations.
To do so, note that the fully expanded form of $$$a$$$ does not change when you perform one of the operations on it.
shit contest. downvoted
What is meant by "z will always become a submask of itself after any number of operations, so ai will always be a submask of (ai or z) after any number of operations"? Please explain.
Basically, $$$z$$$ will only lose bits.
So $$$(a_i\mathrm{\ or\ } z^\prime) \le (a_i\mathrm{\ or\ } z)$$$, where $$$z^\prime$$$ is $$$z$$$ after some operations.
Okay, but what is the meaning of submask here, specifically? Is it that changing only the set bits of z after any number of operations gives a bit representation, i.e. a bitmask, that is called a submask?
$$$z^\prime$$$ is a submask of the bitmask $$$z$$$, since every bit that is set in $$$z^\prime$$$ is also set in $$$z$$$, but not necessarily vice versa.
It means the new bitmask obtained after changing only the set bits of a bitmask is known as a submask of that bitmask. Am I right?
Sounds right
Why is this code working fine for question A? for(int i = 0; i < a.size(); i++){ cin >> a[i]; a[i] = a[i] | z; z = a[i] & z; ans = max(ans, a[i]); } Here I am performing both operations: bitwise OR and bitwise AND. Also, why is it not needed to perform the second operation (bitwise AND) as given in the tutorial? Any proof?
You are doing:
Every bit in z that is 1 is also 1 in x, so (x & z) equals z. In other words, you aren't changing the value of z and the second operation is doing nothing.
I have a solution for problem D different from the tutorial, which does not use segment trees or stacks. It is very difficult for me to explain how it works, but the following ideas might help:
1. It's always optimal to jump to the farthest index from the current index.
2. We always jump from a local maximum to a local minimum, and then to a local maximum, and so on.
161975222
For F, for each $$$i$$$, one can assume that edge $$$(1, i)$$$ is in the tree, and then try to construct the rest of the tree, as per hint 2. We can then check if the tree is valid.
At the start, for each node $$$v$$$, we can do an $$$O(n^2 \alpha(n))$$$ precomputation of the equivalence classes of nodes equidistant from $$$v$$$. Then we can do each tree reconstruction and check in $$$O(n^2)$$$, for a total complexity of $$$O(n^3 \alpha(n))$$$.
Can someone explain the equation given in Problem E?
Hi, could someone take a look at why I'm getting WA on test case 3 of F? My submission is here.
Specifically, I create equivalence classes for each node; each equivalence class represents equal distance from said node. This is done using DSU's, which should make the whole thing have complexity $$$O(n^3)$$$. Thanks!
I got a solution for F which solves it in $$$O(n^3)$$$
Well-known theorem: there are at most two centers in a tree (a center is a vertex with the minimum value of the maximum distance to any other vertex; this value is called the eccentricity).
Note that using the given information, we can find the eccentricity of a vertex: this is just the number of equivalence classes of vertices wrt any given vertex (two vertices belong to the same equivalence class wrt a given vertex if they have the same distance from that given vertex).
So we can easily find the centre(s) of the tree.
If there are two centers, there is a theorem that they must be connected, so we find an edge immediately and proceed to construct the tree.
If there is one center, then we can prove that all vertices adjacent to it have eccentricity exactly one greater than the center's, and all non-adjacent vertices have eccentricity $$$\geq e+2$$$, where $$$e$$$ is the eccentricity of the center. So in this case we can take any node with eccentricity $$$e+1$$$; it must be connected to the center, and we have found an edge.
Can someone explain why I am getting TLE on problem C on test 11? According to me, my program runs in $$$O((n+k)\log_m V)$$$ (as given in the editorial). Instead of storing frequencies, I just use a count variable to check whether the expansions of both arrays are the same. Thanks. https://codeforces.me/contest/1696/submission/173329854
In problem E, if you don't know the equation : $$$\sum_{i = 0}^kC_{n+i}^n = C_{n+k+1}^{n+1}$$$, here's the proof:
first, you have to know $$$C_{a}^b = C_{a - 1}^b + C_{a - 1}^{b - 1}$$$
then,
$$$\sum_{i=0}^k C_{n+i}^{n} = C_{n+1}^{n+1} + C_{n+1}^{n} + C_{n+2}^{n} + \dots + C_{n+k}^{n} = C_{n+2}^{n+1} + C_{n+2}^{n} + \dots + C_{n+k}^{n} = \dots = C_{n+k+1}^{n+1},$$$
where each step merges the first two terms using the identity above (starting from $$$C_n^n = C_{n+1}^{n+1} = 1$$$).
Thanks!