Author: crackersamdjam
Hint 1
Hint 2
Hint 3
Tutorial
Solution
Author: Dormi
Hint 1
Hint 2
Hint 3
Tutorial
Solution
Author: Ninjaclasher
Hint 1
Hint 2
Hint 3
Hint 4
Tutorial
Solution
Author: Plasmatic
Hint 1
Hint 2
Hint 3
Hint 4
Tutorial
Solution
Author: Plasmatic
Hint 1
Hint 2
Hint 3
Hint 4
Tutorial
Solution
1534F1 - Falling Sand (Easy Version)
Author: Dormi
Hint 1
Hint 2
Hint 3
Hint 4
Tutorial
Solution
1534F2 - Falling Sand (Hard Version)
Author: Dormi
Hint 1
Hint 2
Hint 3
Hint 4
Hint 5
Tutorial
Solution
Author: Aaeria
Tutorial
Solution
Author: Ninjaclasher
Tutorial
Solution
Thanks for the quick editorial.
Here are video solutions to A, B, C, D, E, F1, and F2 for people who prefer that style.
I prefer both, Thanks :)
King.
This is hard :(
B solution: should it be $$$a_i > a_{i-1}$$$?
While $$$a_{i-1} < a_i > a_{i+1}$$$, reduce $$$a_i$$$.
Thanks for updating.
a[i] should be greater than both a[i-1] and a[i+1].
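For concreteness, here is a minimal sketch of that peak-flattening idea (my own interpretation, not the official code; the function name is mine): flatten every strict peak down to its taller neighbour, counting operations, then recompute the outline.

```cpp
#include <bits/stdc++.h>
using namespace std;

// Sketch of the peak-flattening greedy for problem B:
// a strict peak a[i] (taller than both neighbours, with virtual
// zero columns at the ends) is lowered to its taller neighbour;
// each unit lowered costs one operation but saves two outline units.
long long ugliness(vector<long long> a) {
    int n = a.size();
    long long ops = 0;
    for (int i = 0; i < n; i++) {
        long long left   = (i > 0)     ? a[i - 1] : 0;
        long long right  = (i + 1 < n) ? a[i + 1] : 0;
        long long target = max(left, right);
        if (a[i] > target) { ops += a[i] - target; a[i] = target; }
    }
    long long outline = a[0] + a[n - 1];  // vertical edges at both ends
    for (int i = 0; i + 1 < n; i++) outline += llabs(a[i + 1] - a[i]);
    return outline + ops;
}
```

For the [5, 1] example discussed below, this lowers a[0] to 1 (4 operations) and reports 2 + 4 = 6.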
Why am I getting WA on test 4 in problem C? 119395523. I used DFS and printed 2^(number of connected components).
Try running the initial for loop from 0 to n. [0,n].
Thanks, got it. I had to use 0-based indexing.
Unlucky mate. You got the logic right though.
will learn from this mistake and improve next time
You messed up somewhere when counting the connected components; just check it once.
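In case it helps others debugging C, a minimal sketch of the counting (identifiers are mine, not the editorial's): union the top and bottom value of each column with a DSU, then the answer is 2^(number of connected components) mod 10^9+7.

```cpp
#include <bits/stdc++.h>
using namespace std;

// DSU over the values 1..n; each column links its top and bottom value.
struct DSU {
    vector<int> p;
    DSU(int n) : p(n) { iota(p.begin(), p.end(), 0); }
    int find(int x) { return p[x] == x ? x : p[x] = find(p[x]); }
    bool unite(int a, int b) {
        a = find(a); b = find(b);
        if (a == b) return false;
        p[a] = b; return true;
    }
};

// Number of valid configurations: 2^(connected components) mod 1e9+7.
long long countAssignments(const vector<int>& top, const vector<int>& bot) {
    const long long MOD = 1e9 + 7;
    int n = top.size(), comps = n;
    DSU d(n + 1);  // values are 1..n
    for (int i = 0; i < n; i++)
        if (d.unite(top[i], bot[i])) comps--;
    long long ans = 1;
    for (int c = 0; c < comps; c++) ans = ans * 2 % MOD;
    return ans;
}
```

On the first sample (top 1 4 2 3, bottom 3 2 1 4) everything merges into one component, giving 2.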
In the problem B editorial it is mentioned that
"observe that decreasing a column will never affect whether it is optimal to decrease any other column, so we can treat the operations as independent."
Why are they independent? Why will decreasing one column never affect the others?
Initially consider just column $$$i$$$: all the columns in the ranges $$$[1, i-2]$$$ and $$$[i+2, N]$$$ won't be affected, since anything you do to column $$$i$$$ doesn't change the outline of those columns. Since the height of column $$$i$$$ stays at least that of column $$$i-1$$$ (for any update to the height of $$$i$$$ based on the algorithm), the right outline of column $$$i-1$$$ won't change (the left outline also doesn't change, by induction, i.e. use $$$i-1$$$ as the central column). You can argue similarly for column $$$i+1$$$.

I tried a different approach for D, which I am not able to prove. It would be great if someone could have a look and either prove or disprove it.
I took vertex 1 and rooted the tree on it. Then I calculated the levels of the other vertices for this rooted tree. Root is at level 0. All nodes at level 1 are children of vertex 1.
From here, we will go to all the next levels from level 2 onwards in increasing order and query for all the nodes whose parent we don't know. From levels of the vertices and the array we get after querying a particular vertex, we can calculate the children of this vertex, the parent of this vertex, and the siblings of this vertex. We assign these values and move towards the next vertex.
Link to my code
This approach is the same as the one given in the tutorial. As you start from level 2: at this level you will get the parents of all nodes at level 3, so you would never query nodes at level 3; you then go to level 4, and continue in a similar fashion.
No, I may go to level 3 as well. I am querying those vertices whose parents I don't know currently. If there exists a sibling for a node, I will calculate the parent of sibling as well. Now, I will not query this sibling node as its parent is known and it may have children which will be in level 3. So when I reach level 3, I will query for the children.
You are basically querying all the even-level vertices, which form one side of a bipartition. But it might get hacked: say the total number of nodes on even levels is more than n/2. How are you tackling this? I queried the even/odd nodes, whichever set is smaller.
Read this comment as I may not have been very clear in my explanation.
Yes, I also used the same approach. Its correctness can be proved by simply noting that for any edge in the tree this approach asks at most one query (zero in the case of the edge that connects a node's parent and a sibling).
Thus the maximum number of queries asked in the process is definitely <= ceil(n/2), and the upper limit is reached when no node has any sibling, i.e. a straight path structure like 1->2->3->4->5.

That makes sense. Thanks.
Even I did it with a similar approach; I tried it on three or four examples and it works fine. This seems intuitive to me: for non-leaf nodes we query only once to find the parent, and from that same query we also get the information about at least one child, so we cover at least two nodes per query (even more if the node has siblings); that's why the number of queries is always at most ceil(n/2). Also, if any particular depth has only a single node, then we don't require any query to find the parents of the nodes at depth+1, as there is only one option for the parent at the previous depth.
my submission - https://codeforces.me/contest/1534/submission/119432261
hey, we got a similar approach. I tried to prove it here.
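A small sketch of the even/odd-level selection mentioned above (the helper name is mine; it assumes the depths from the first root query are already known). Since every edge joins consecutive depths, querying every vertex of the smaller parity class, which has at most floor(n/2) vertices, discovers every parent.

```cpp
#include <bits/stdc++.h>
using namespace std;

// Every tree edge joins a vertex at even depth to one at odd depth,
// so querying all vertices of one parity class reveals every edge.
// Pick whichever class is smaller: min(|even|, |odd|) <= floor(n/2).
vector<int> chooseQuerySet(const vector<int>& depth) {
    vector<int> even, odd;
    for (int v = 0; v < (int)depth.size(); v++)
        (depth[v] % 2 == 0 ? even : odd).push_back(v);
    return even.size() <= odd.size() ? even : odd;
}
```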
Nice problemset. "Lost Problemset"
*bugabooset
Victory to Hanuman, ocean of wisdom and virtue; victory to the lord of the monkeys, illuminator of the three worlds.
Nice questionset.
So interactive :D
119391919 Can someone please tell me why I'm getting TLE in problem C?
You're creating an array of size 400005 for every test case. There can be a lot of test cases, so you're basically trying to create 4*10^9 integers, which takes too long.
liked the beauty of problem D, great job!
Yes! Exactly! It took me one hint to solve though.
For E, I think we can also do greedy:
First, if $$$n$$$ is odd and $$$k$$$ is even, we can never win.
Notice that two moves always result in an even number of bits being flipped. So if $$$n$$$ is odd and $$$k$$$ is odd, then we will have an odd number of moves, so start by turning on some $$$k$$$ bits.
Now we are left with turning on even number of $$$0$$$ bits.
Let's denote $$$action1$$$ as turning on some $$$k$$$ bits using one move, and $$$action2$$$ as using two moves to turn on $$$x$$$ bits, where $$$x$$$ is even, and $$$2\le x\le min(2k, 2(n-k))$$$.
If $$$action2$$$ is unavailable (there are no possible values for $$$x$$$) and our current number of zeros is not divisible by $$$k$$$, we lose.
Otherwise, if $$$k$$$ is odd we can just do $$$action2$$$ greedily, and if $$$k$$$ is even, before the greedy we might need to do a single $$$action1$$$ (this is easy to check).
Code: 119401441
Kinda different solution to E. I'm sad, tired, and hopelessly hardstuck, so this is the short ver
The answer is NO iff $$$n$$$ is odd and $$$k$$$ is even. The if part is true because we can't distinguish between any two arrays $$$a$$$ and $$$b$$$ such that for all $$$i$$$, $$$a_i \oplus b_i = 111111...$$$.
Let's show that it's possible otherwise. The case where $$$k$$$ divides $$$n$$$ is trivial.
Consider all other cases except for ($$$k$$$ is odd, $$$n$$$ is even, $$$2k$$$ > $$$n$$$). We can greedily form segments of length $$$k$$$ until there is an even number (let this number be $$$r$$$) of elements remaining such that $$$r < 2k$$$. It takes $$$2$$$ more queries to solve the problem. We can show that this is optimal via a parity condition.
In the case ($$$k$$$ is odd, $$$n$$$ is even, $$$2k$$$ > $$$n$$$), we can set $$$k = n - k$$$ and solve the problem as in the former case, except that we query the complement of things. Pretty sure this is provable (the problem becomes strictly harder if you're given the complement xor, and you can solve it in the optimal number of moves in the equivalent easier version), but my head is foggy :/
I think solving it this way is very impractical though; the BFS solution should be the "correct" one to think of during a contest.
Can someone explain why my O(N^2) solution gives TLE in problem D? My Submission
Let the current printed vertex be r. Every vertex v with d[v] == 1 has an edge to r, so we set graph[r][v] = true.
Also, if we get a sequence like 0 1 1 2 3 4, we can say that there is definitely an edge between distances 2-3 and 3-4, because there is only one shortest way to go from 2 to 3 and from 3 to 4. So we can keep track of the distance values which occur only once ((0, 2, 3, 4) in this case) and their corresponding vertices (({0, 1}, {2, 4}, {3, 5}, {4, 6})). Now we loop through this array and add an edge between two adjacent elements (element.second) if the difference in their d values (element.first) is equal to 1.
To decide the next r value we look at the distance value (d[v]) of each vertex and count of elements having distance value (d[v] + 1). We want to select a vertex v such that the count of element having distance (d[v] + 1) is maximum.
I am not sure if my approach itself is correct, but still, an O(N^2) solution should not be getting TLE, right?
Thanks.
Please, someone, explain 2nd problem, Thanks :|
Basically, for a given array the ugliness is the sum of abs(arr[i] - arr[i+1]) (with a 0 inserted at the front and the back of the array). We can perform an operation which decreases arr[i] to arr[i]-1, provided arr[i] > 0; the ugliness is increased by 1 for each operation performed.
We need to minimize the ugliness of the array by performing the operation zero or more times, on any indices.
Benq orz.
Benq will break the highest rating achieved in the history of codeforces after rating changes.
I really enjoyed the round, I think it was well balanced.
I wonder why so many contestants, including me, got WA on F1, even though my solution is similar to the tutorial's (using strongly connected components). That confuses me. (Ahh... I spent more than an hour during the contest debugging my F1 code!!!)
Here is my submission
Can somebody provide some details to be careful on F1? Or some special test cases?
Hi Dormi, what I have done is: I created a directed graph as mentioned, then chose a vertex with in-degree 0 and started a DFS, marking all reachable vertices as done; then chose another vertex with in-degree zero, and so on. My answer is the number of times I start a DFS. Why would this solution give a wrong answer? Why do we need to find SCCs instead of doing DFS on the fly?
I start with the 0-indegree vertices but try to visit all of them; here is my submission. Please help.

Consider two rows at distance 2, each completely filled with sand. Each row builds a cycle of dependencies (an SCC), the upper row triggers the lower row, and there is no vertex with indegree 0.
To see that the whole construct is triggered with only one operation, we need to somehow see that the triggering must start in the top row, that is, in an SCC with indegree 0.
Yes, but in this case my code answers correctly, because when there are no vertices with in_degree = 0, it tries to visit unvisited vertices starting from the vertices with greater height.
The case where my code will fail is when an SCC at a lower height triggers two or more SCC's at a height above it.
Greater height is not a good criterion, as seen in example 2. We can construct grids where we must not start from the greatest height.
Something like this:
Yes for your example my code doesn't work. Thank you.
You could use topological sort here, instead of going with the indegree == 0 idea.
Are you suggesting that instead of shrinking each SCC into one node and then counting vertices with zero in-degree, we can just do a topological sort and count the number of in-degree 0 vertices?
We don't exactly need to find nodes with indegree == 0 as starting nodes. Just topologically sort the graph and then start from the reverse order. This will make sure you use the least number of nodes as starting points.
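To make the SCC idea in this thread concrete, here is a self-contained sketch (my own code and naming, not the editorial's): condense the dependency graph with Tarjan's algorithm and count condensation components with indegree 0; that count is the F1 answer.

```cpp
#include <bits/stdc++.h>
using namespace std;

// Count source components (indegree 0) of the SCC condensation:
// run Tarjan's SCC, then check which edges cross components.
struct SCC {
    int n, timer = 0, comps = 0;
    vector<vector<int>> adj;
    vector<int> tin, low, comp, stk;
    vector<bool> onStk;
    SCC(int n) : n(n), adj(n), tin(n, -1), low(n, 0), comp(n, -1), onStk(n, false) {}
    void addEdge(int u, int v) { adj[u].push_back(v); }
    void dfs(int u) {
        tin[u] = low[u] = timer++;
        stk.push_back(u); onStk[u] = true;
        for (int v : adj[u]) {
            if (tin[v] == -1) { dfs(v); low[u] = min(low[u], low[v]); }
            else if (onStk[v]) low[u] = min(low[u], tin[v]);
        }
        if (low[u] == tin[u]) {            // u is the root of an SCC
            while (true) {
                int v = stk.back(); stk.pop_back(); onStk[v] = false;
                comp[v] = comps;
                if (v == u) break;
            }
            comps++;
        }
    }
    int sourceComponents() {
        for (int u = 0; u < n; u++) if (tin[u] == -1) dfs(u);
        vector<int> indeg(comps, 0);
        for (int u = 0; u < n; u++)
            for (int v : adj[u])
                if (comp[u] != comp[v]) indeg[comp[v]]++;
        int cnt = 0;
        for (int c = 0; c < comps; c++) if (indeg[c] == 0) cnt++;
        return cnt;
    }
};
```

The two-rows example above becomes one SCC triggering another, so only the top SCC is a source and the answer is 1.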
Is it possible to get a different verdict in pretests and the final system tests?
Yes, it happened to me twice.
Can anyone please tell me why I am getting a wrong answer on pretest 2? 119361708
There can be no 'R' or 'W'. Your code fails on test cases when all characters are '.'
Now the same code got accepted; does that mean the test case with only '.' is not there? Here it is: 119409968
Emmm, last time I ran a 3*3 grid containing only '.' on your code 119361708 and it output a wrong answer. But now your code 119409968 outputs the right answer.
So something must have changed... (doge)
Can anyone please tell me what "set" means in "bipartite set" in problem D?
What are the algorithms I need to learn so that I can consistently solve A and B problems? PS: newbie :)
none. That's the good part.
You don't need to learn many algorithms; A and B problems are mostly ad hoc. Just solve Div 2 A, B, and C problems from previous contests and you'll be good.
In problem E, does d, the maximum number of queries, also vary with n and k? I thought we just had to stay within 500 queries irrespective of n and k. T_T
One of the best Editorial for Problem E. Lost Array, I could find on the net is this one by MagentaCobra, Solution for Lost Array (E).
Thanks! Really happy to see my editorials help others :D
Wonderful explanation! Thanks a lot!
In problem D, sample test case 2: how can we tell the tree for sure after just one query? The query ? 5 gives 2 2 1 1 0. Couldn't the tree be different from the given one?

Yes, really confusing. It is just the authors' example; they can know the solution without even attempting it :))
Test cases in interactive problems don't make much sense. They are there just to help you understand the interaction protocol, without giving a hint to the solution.
Update: Got the wrong case:
What's wrong with my code 119358197? I got Wrong Answer on test case 4. My logic was exactly the same as the editorial's.
Did anyone solve problem C using maths, like a single formula or something?
Sorry, I think the sieve and modular exponentiation were the only math in it; there is no single formula.
He is using DSU. I am asking if it was possible to solve it using only some mathematical formula?
Can anyone explain the intuition behind hint 4 for problem D?
Why can't it be <= floor(n/2)? Any counterexample? UPD: problem solved, the editorial is updated.

The numbers of black and white vertices can obviously be equal if n is even.
Sorry for asking wrongly; I am confused about the ceil(n/2). Why can't it be <= floor(n/2)?
It is = ceil(n/2) in the case of equal parts, and = floor(n/2) in the case of almost equal parts. E.g. n=4: min(2,2) = 2; n=5: min(2,3) = 2 <= floor(5/2).
In problem B, why is it not optimal to reduce the height of a[i] when a[i] > a[i+1] and i == 0?
Consider the case n=2 with array 5 1: initially ans = 5+4+1 = 10; after reducing to 1 1, ans = 4+1+1 = 6.
Or am I missing something?
It is optimal. You just reduced the ugliness score from 10 to 6 right?
Yeah, but why is the condition if (a[0] > a[1]) a[0] = a[1] not given in the editorial?
The condition for the optimal answer given in the editorial is only (a[i] > a[i-1] and a[i] > a[i+1]), but what about the endpoints?
If you pad the array with a virtual zero column at each end, the same condition covers the endpoints too.
https://codeforces.me/contest/1534/submission/119444809
Any ideas why?
I figured it out. It turns out it's because I loop till the bottom of the table for every sand block I have. I found an optimization for this with storing the first block of sand below each block with DP.
A possibly easier way than DP (I used this in White Pawn from a recent AtCoder contest) is to store the values in a set and use lower_bound/upper_bound depending on what you want. It's slower to run, being $$$O(\log n)$$$ per query rather than $$$O(1)$$$ after pre-calculation, but it's faster to implement, and that extra computation time is almost always OK.

Can you please elaborate? Store which values?
In my solution, I had $$$top_j$$$ store the values of $$$i$$$ for which there was a block of sand in $$$grid_{i,j}$$$. So I can easily query the next block under $$$(i, j)$$$ with top[j].upper_bound(i) (upper_bound because lower_bound would give $$$(i, j)$$$ itself).

Oh, I got it. Thanks! But tbh for me DP was easier to implement XD. Your solution is definitely useful in many other cases, though.
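A tiny self-contained version of that lookup (identifiers are mine): keep, for each column, an ordered set of the rows that contain sand; the first block strictly below a given row is one upper_bound away.

```cpp
#include <bits/stdc++.h>
using namespace std;

// Rows with sand in one column, kept sorted. Returns the first
// row strictly below row r, or -1 if nothing lies below it.
int firstBlockBelow(const set<int>& columnRows, int r) {
    auto it = columnRows.upper_bound(r);  // first element > r
    return it == columnRows.end() ? -1 : *it;
}
```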
The hints before the solutions are very helpful. I think every editorial should have hints.
I think my solution for D is more creative/easy because you don't need any experience with standard/similar problems, so I am putting it here.
So we will root the tree at 1 and evaluate parent for each node.
first we will ask for depths (i.e. distances from 1), and for all nodes with depth 1 we will set 1 as their parent
Now, let's iterate over all nodes in increasing order of depth.
Whenever we encounter a node (call it r) without a parent assigned, we make a query with r and set r as the parent of all nodes at distance 1, except the one that already has a parent; call that one p. Now take r and the other nodes that are at distance 2 from r and at the same depth as r, and set p as their parent.
Proof:
code
I only had a blurred picture of it in the contest :(
Please correct me if something is wrong, and also feel free to ask.
I wasn't able to take this contest, but I just did a virtual and I have to say — These problems were brilliant! They were unique, easy to understand, and not too complicated to implement (even F1, if you just look at the solution, it seems complicated, but it's really not).
My solution to E was much more direct than using BFS. The first observation is that each query just gives you a linear equation (mod 2) containing $$$k$$$ of the variables. Then, in order to find the XOR-sum of everything, it's necessary and sufficient to make some queries such that a linear combination of them is the desired XOR-sum. In particular, because we're working mod 2, the coefficient of each equation is either $$$0$$$ or $$$1$$$, and we can just not query the 0's, so we need exactly a set of queries such that the total XOR is the XOR-sum. In other words, we need to query each variable an odd number of times.
Now, the problem is much simpler: first, consider a number of queries $$$d$$$, so that we will make exactly $$$d$$$ queries each with exactly $$$k$$$ distinct variables. Then, if we fix the frequencies of variables, the greedy strategy "always query the $$$k$$$ variables with the greatest remaining frequency" is guaranteed to work unless the max frequency is $$$> d$$$, in which case there is no solution. Thus, we want to minimize the max frequency while keeping all frequencies odd and having sum of $$$dk$$$; this is achieved by making the frequencies as similar as possible, either $$$1+2\lfloor (dk-n)/2/n \rfloor$$$ or $$$1+2\lceil (dk-n)/2/n \rceil$$$. In particular, $$$d$$$ is valid iff $$$dk \ge n$$$, $$$dk \equiv n \pmod{2}$$$, and $$$1 + 2 \lceil (dk-n)/2/n \rceil \le d$$$, so we can find the minimum $$$d$$$ with a simple linear search.
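The linear search from the last paragraph is only a few lines; here is a hedged sketch (my own function, under the stated conditions $$$dk \ge n$$$, $$$dk \equiv n \pmod 2$$$, and max frequency $$$\le d$$$):

```cpp
#include <bits/stdc++.h>
using namespace std;

// Smallest number of queries d of size k over n variables such that
// every variable can appear an odd number of times and the greedy
// "largest remaining frequency" strategy succeeds (max frequency <= d).
int minQueries(long long n, long long k) {
    for (long long d = 1; d <= 2 * n; d++) {
        long long total = d * k;
        if (total < n || (total - n) % 2 != 0) continue;
        long long half = (total - n) / 2;                  // extra pairs to spread
        long long maxFreq = 1 + 2 * ((half + n - 1) / n);  // 1 + 2*ceil(half/n)
        if (maxFreq <= d) return (int)d;
    }
    return -1;  // impossible: n odd and k even
}
```

For example, with n = 4 and k = 3 the search settles on d = 4 (each query omits one variable, so every variable appears 3 times, which is odd).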
I was thinking of something similar and got all the inequalities for necessary $$$d$$$. But then I got stuck at step of showing that the $$$d$$$ is sufficient (the greedy strategy you mentioned). Can you give some more insight on why it works and how did you think of that? Thanks!
The intuition is just that having smaller groups is better than larger groups, because small groups can always be split into separate queries while large groups can get stuck, so you want to prioritize cleaning up the largest groups. The k=2 case is definitely quite well known: you can form pairs of distinct elements as long as there is no single majority element. You can prove this formally by inspecting the greedy algorithm, or by a max-flow-min-cut/Hall's marriage theorem argument.
F1 is similar to https://dmoj.ca/problem/wc99sp5.
In question D I'm getting an error "wrong output format Unexpected end of file — int32 expected". I don't understand if I used wrong approach because my code works fine on small test cases and counter test cases. If someone could help me resolve it that would be a great help. Thank you. My code: 119541888
Problem B would have been even more interesting and difficult if the height of bars could be increased as well. Which is, sadly, how I misinterpreted it.
In problem G, $$$dp[x][y] = \left(\min_{y-(x-a) \leq b \leq y+(x-a)} dp[a][b]\right) + \sum_{\text{potatoes } (x,z)} \frac{|y-z|}{2}$$$. Not all such $$$b$$$'s are reachable; won't that create a problem? Also, what should our initial function be? I think it should be $$$f(0) = 0$$$ and $$$\infty$$$ elsewhere. Somebody please help me out.
Many times, you can analyze the version of the problem where positions are allowed to be reals rather than integers, and then prove they have the same result. This problem has that property, because all inflection points are integers, so you might as well only visit integer points.
Your initial function is correct. It's actually enough to treat it as $$$f(x) = (n+1)|x|$$$.
Thanks a lot!! I understood it now. Actually what I understood is :
The answer always occurs at one of the given points in the final graph (we need not worry about whether our calculated minimum is achievable or not).
Whenever we expand our graph about the center, all the points in the domain get their new values from points which were in the domain previously (here I want to say that they are not taking any junk value from previous states which were not achievable). This can be seen from the parity of the points and the offset we have. Actually, this was the main confusion for me.
Basically, at every transition all the points in the domain are correct and we do not care about the others, and in the end we will surely have some point (which was given initially) on the global minimum line.
Please correct me if I am wrong somewhere.
It's mysterious to me. Please point out what is wrong in this easy problem A solution.
This loop checks only positions with the same parity, but it should check all positions.
In F2 we need to mark the minimum number of nodes in a DAG such that each node in a set of "special nodes" can be reached from at least one marked node.
Because each special node is reachable from a contiguous range of other nodes, we can solve this problem efficiently in this case.
Is this problem NP-hard in a general graph?
The set cover problem (SCP) can be reduced to the task you describe. Since SCP is NP-hard, the answer is yes: it is NP-hard in a general graph.
For problem F2, to solve the last task (covering every special node via segments) we can use an easy DP that seems to work in O(size^2), but actually the sum of all segment lengths is O(n). Here is my solution: https://codeforces.me/contest/1534/submission/120320191
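If it helps, the contiguous-range case also reduces to stabbing intervals with a minimum number of points; a standard greedy sketch (my own code, assuming the [l, r] range of nodes reaching each special node has already been computed):

```cpp
#include <bits/stdc++.h>
using namespace std;

// Minimum number of points so that every interval [l, r] contains
// at least one chosen point: sort by right endpoint and greedily
// take r whenever the current interval is not yet stabbed.
int minStabbingPoints(vector<pair<int, int>> iv) {
    sort(iv.begin(), iv.end(),
         [](auto& a, auto& b) { return a.second < b.second; });
    int cnt = 0;
    long long last = LLONG_MIN;  // rightmost point chosen so far
    for (auto& [l, r] : iv)
        if (l > last) { cnt++; last = r; }
    return cnt;
}
```

Note that this greedy relies on the interval structure; per the comment above, the general-graph version is NP-hard.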
For problem G: if we offset all values to the left of the min point by $$$-(j-i)$$$ and all values to the right of the min point by $$$(j-i)$$$ to get $$$f_j$$$ from $$$f_i$$$, then $$$f_i$$$ should be a convex function. But we need to add $$$f_i$$$ to $$$g_i$$$; how do we prove that $$$f_i + g_i$$$ is a convex function and can support the next operation?
That is to say, how do we prove that adding one convex function to another convex function yields a convex function?
OK, I know how to prove that now. Thanks.
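For anyone else stuck on the same step: closure of convexity under addition follows directly from the definition, for any $$$t \in [0,1]$$$:

```latex
\begin{align*}
(f+g)\bigl(tx+(1-t)y\bigr) &= f\bigl(tx+(1-t)y\bigr) + g\bigl(tx+(1-t)y\bigr) \\
&\le t\,f(x) + (1-t)\,f(y) + t\,g(x) + (1-t)\,g(y) \\
&= t\,(f+g)(x) + (1-t)\,(f+g)(y),
\end{align*}
```

so $$$f_i + g_i$$$ is again convex and the next offsetting operation can be applied to it.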