Hello! Codeforces Round 661 (Div. 3) will start at Aug/05/2020 17:35 (Moscow time). You will be offered 6 or 7 problems (or 8) with expected difficulties to compose an interesting competition for participants with ratings up to 1600. However, all of you who wish to take part and have rating 1600 or higher, can register for the round unofficially.
The round will be hosted by the rules of educational rounds (extended ACM-ICPC). Thus, during the round, solutions will be judged on preliminary tests, and after the round there will be a 12-hour phase of open hacks. I tried to make strong tests, since I know you will be upset if many solutions fail after the contest is over.
You will be given 6 or 7 (or 8) problems and 2 hours to solve them.
Note that the penalty for the wrong submission in this round (and the following Div. 3 rounds) is 10 minutes.
Remember that only the trusted participants of the third division will be included in the official standings table. As it is written at the link, this is a compulsory measure for combating unsporting behavior. To qualify as a trusted participant of the third division, you must:
- take part in at least two rated rounds (and solve at least one problem in each of them),
- never have had a rating of 1900 or higher.
Regardless of whether you are a trusted participant of the third division or not, if your rating is less than 1600, then the round will be rated for you.
Thanks to MikeMirzayanov for the platform, help with ideas for problems and for coordination of my work. Thanks to my good friends Daria nooinenoojno Stepanova, Mikhail awoo Piklyaev, Maksim Neon Mescheryakov and Ivan BledDest Androsov for help in round preparation and testing the round. Also thanks to Artem Rox Plotkin and Dmitrii _overrated_ Umnov for the discussion of ideas and testing the round!
Good luck!
UPD: Huge thanks to Ivan Gassa Kazmenko for testing the round and fixing some issues with statements and the round in general!
UPD2: Editorial is published!
sweeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeet
What is sweet? I don't see any sugar here...
Thanks for the downvotes!
<almost-copy-pasted-part>
vovuh is the best div.3 problem setter
This is a new announcement ?!?
Fortunately we already had a problem on this algorithm in 2018. KGP18ROL ICPC Regionals 2018 problem SORTALGO.
Btw I like the way of describing the algorithm.
Don't make jokes like that.
.
Don't even do a single pass; just make a list with the first element of the original list. A single-element list is always sorted.
why first?
The last element must be Stalin himself.
Good frequency of contests in August.
10 MINUTE PENALTY
oh no...
When you realize there are 6 CF contests in the next 12 days... Feeling like °(^▿^)/°
But for us Chinese, we have to stay up late one more night... But it's worth it!
What a frequency of contests! That makes me love Codeforces. We already have 6 contests this month, and now we get one more. When it comes to frequency, Codeforces has no match. It might even be possible that we will get some more contests beyond those currently showing in the calendar, like this one.
You forgot %almost-copy_paste/>
This will be my first unrated div3 contest :) I hope one day I will say my first unrated div2 contest.
Congrats!
.
No :)
Yes, the round is rated for anyone below 1600 rating, including those who are still unrated/haven't participated in any rated rounds.
Ratings still not updated. Why?
Greetings everyone. I, as the writer of this comment and a completely new account that is obviously not an alternate account of another very high rated person on the platform Codeforces, find the website codeforces.com particularly confusing and novel. Because there seems to be absolutely nothing in the statement that tells me whether this round has a probability of affecting my rating, I request that one of the more experienced members such as (insert annoying tags of lgm) tell me if the codeforces round of the number 661 variety could possibly have a nonzero probability of affecting my rating.
This will be my first rated contest :/
Good Luck Everyone!
I'll be competing in this contest as well; is it okay to get cold feet? Are there any tips anyone wants to suggest? It would be a big help.
There is no need to worry about anything. In fact, nobody cares if you participate or not, or for your results. Just relax, and enjoy the time.
!
Waiting for short and interesting problems. Good luck and have fun!
Question: What if my presented rating is lower than 1600 but my real rating** is over 1600 (which it is)? How will my rating be calculated?
**Real rating: after the update to rating for new users, it says specifically what rating is used for calculation and what it being presented. My actual rating, which is used for calculation, is over 1600.
Register for the contest. If you are a "contestant", then you are rated. You are unrated if you are "Out of Competition".
I think you'll be rated if your presented rating is lower than $$$1600$$$.
is there a way to see your hidden rating?
To all the bhagwa holders ..... Jai Shree Ram . Wish you all high ratings on Hindutv Divas
"Note that the penalty for the wrong submission in this round (and the following Div. 3 rounds) is 10 minutes." Can someone please explain what this means?
For any wrong submission you will gain a 10-minute penalty. The penalty is used to rank contestants with the same number of solved problems.
"A tree is a connected graph without cycles. A rooted tree has a special vertex called the root. A parent of a vertex v is the last different from v vertex on the path from the root to the vertex v. Children of vertex v are all vertices for which v is the parent. A vertex is a leaf if it has no children. The weighted tree is such a tree that each edge of this tree has some weight."
Why don't you include this at the end? Almost everyone knows what a tree is.
My profile picture is a tree :-).
Nice problemset!
again long queue issue!!
Don't know why it came in the middle of the contest. It usually comes at the start.
I will cry if they declare this contest unrated.
long queues plz fix ASAP
This is my best ever performance, please don't let it be unrated...
Codeforces for 3/4 of the contest and Queueforces for the last 1/4 :(
Plz fix the long queue, Mike, this is ruining 2 hrs of hard work. It is worse than it occurring at the start of the contest.
Queueforces Reloading....
+15 min will not help much as every minute is counted while evaluating.
But it will not let people's hard work be spoiled, like mine. What else do you expect?
For the past 1.5 hours, I was trying to solve problems E1 and E2 thinking that the path weight for each leaf has to be less than S. :Facepalm:
Exactly Same Bro.
I also thought the same
I coded the solution of E1 thinking this was the problem. Then noticed that, the second sample case doesn't match with my "correct" output.
Yes, same here. Can you share your approach for the wrong problem we were trying to solve?
Deleted
Run a dfs and maintain a set of all ancestor edges. After reaching a node, while the sum of this set is greater than S, remove the largest element (and subtract from our current total) and insert the halved element (and sum up with our current total). Code
I am not sure if that will always give the optimal answer. The problem here might be that you are not taking advantage of the edges near the root in any way (reducing them benefits a larger number of paths).
During the contest, I didn't prove the solution that came to my mind. And yes, you're right. It's not optimal. Here's a test case:
My code will output 2 but in the optimal solution we can halve just the first edge.
By the way, what was your solution?
I tried hard but couldn't come up with any solution
My approach was a little heavy on memory: first, for each leaf node I made a list of all the edges which make up the path to that leaf, then sorted the edges in each path by their weight, and then traversed this list of paths. To fix the path in the current iteration, start performing operations from the back of each path (as that edge has the highest weight in the path) until its sum is less than s. We'll precompute the sum of each path beforehand, obviously.
What do you think would be the best intended solution for that? I was able to come up with a solution of complexity O(log(n)*n*n) but couldn't optimise further.
Can you share your idea. I was unable to come up with any solution that runs in polynomial time.
The maximum possible number of moves is n*log(1e6) and for each node, the maximum possible move is sub[node]*log(1e6)... considering all possible decompositions, what I intended to return was a vector of size(maximum_no_of_moves for each node) and using dp to build the optimal answer for the parent node over all its possible decompositions.
Since sigma(sub[node]*log(1e6)) is O(n^2*log(n))... the intended time complexity was O(n^2*log(n))
same here. answer for that version was 6, right?
Same here bro, just read the problem again after reading your comment. Maybe it's high time for me to get glasses.
I do not get it. We had to find the minimum number of moves in which we select an edge and divide its weight by 2, such that at the end, path weight from root to all leaves is at most S. Is that not it? Or I misunderstood the question? Because for some reason, I cannot understand why that second TC has answer of 8.
Okay, I just realised what an idiot I was for misreading the question. But seriously, can that line about the sum of weights of ALL paths not be made bold? I am sure a lot of people like me could not do it because of that.
Wait, it was ALL PATHS ?:/( Same boat)
i still dont understand the problem, can you explain please?
I hope you read it once.
What they meant was: find the weight of the path from the root to each leaf. Let's say the sum of the weights of ALL these paths is s. You have to reduce it to at most capital S (which is given in the input). What a lot of people (like me) understood instead was: reduce the weight of each root-to-leaf path SEPARATELY to a value less than S.
I hope you understood
oh ok i think i understand now, i had a similar misunderstanding also
Actually I'm still a bit confused. In sample 2 there are two leaves, 2 and 4. The sum of the weights of the two edges connecting the root and leaf 2 is 223, and root to leaf 4 is 65. The sum of these two is 288, which can be reduced to less than 50 in well under the 8 moves. So I still don't quite understand.
It is 8 moves. You will have to use a max priority queue. Each time, pick the edge with the most weight, divide it by 2, insert it back, and keep doing this until the total sum is at most S. In this case the sequence will be: 123 -> 100 -> 61 -> 55 -> 50 -> 30 -> 27 -> 25. After this, the edges will have weights 15, 10, 13, 12, which makes the sum = 50.
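The halving loop described here can be sketched as a small max-heap simulation (a sketch only; countMoves is a made-up name, not anyone's actual submission):

```cpp
#include <cassert>
#include <queue>
#include <vector>

// Repeatedly halve the largest weight until the total is at most S,
// returning the number of halving moves performed.
long long countMoves(const std::vector<long long>& w, long long S) {
    std::priority_queue<long long> pq(w.begin(), w.end());
    long long sum = 0, moves = 0;
    for (long long x : w) sum += x;
    while (sum > S) {
        long long top = pq.top();
        pq.pop();
        sum -= top - top / 2;   // halving w removes w - w/2 from the sum
        pq.push(top / 2);
        ++moves;
    }
    return moves;
}
```

On the sample discussed above (weights 100, 123, 10, 55 and S = 50) this performs exactly the 8 moves listed.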
Ohh i got it, thanks a lot!
Looking for the solution in the comments and saw this, I feel so dumb...
Same lol, was thinking why is the third hardest problem in a Div3 contest that hard.
Comparing the problemsets of the last few contests, this one is well distributed. Awesome!
I think D was a subproblem of 1370E - Binary Subsequence Rotation.
A, B, C, and D are simple and good.
E2 is a good problem after E1.
I solved D but I couldn't solve C :( Can anyone help me, please?
Iterate over all pairs to find all possible values of $$$s$$$. For all values of $$$s$$$, find the number of pairs with that sum. Use hashmap to find that in linear time. Print the maximum no. of pairs formed.
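A sketch of this idea (using a plain count array instead of a hashmap, since weights in this problem are at most 50; maxPairs is an illustrative name):

```cpp
#include <algorithm>
#include <cassert>
#include <vector>

// For every possible team sum s, count how many disjoint pairs reach s;
// weights lie in [1, 50], so a count array replaces the hashmap.
int maxPairs(const std::vector<int>& w) {
    int cnt[101] = {0};
    for (int x : w) cnt[x]++;
    int best = 0;
    for (int s = 2; s <= 100; ++s) {
        int pairs = 0;
        for (int a = 1; 2 * a < s; ++a)
            pairs += std::min(cnt[a], cnt[s - a]);
        if (s % 2 == 0) pairs += cnt[s / 2] / 2; // pair equal weights together
        best = std::max(best, pairs);
    }
    return best;
}
```

For example, weights {1, 2, 3, 4, 5} give 2 pairs (with s = 5: (1,4) and (2,3)).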
sorry!
This is my code: https://codeforces.me/contest/1399/submission/89043997
How to solve E1?
After calculating the values, just greedily take the vertex that gives the biggest delta(v), where delta(v) = (num[v] - num[v] / 2) * leaves[v] and leaves[v] = the number of leaves in the subtree of v.
- Run a DFS to store how many leaves are in the subtree of each node.
- Store {impact, index} of each edge in a priority queue.
- The impact for edge {a,b,w} (a is the parent of b) is $$$sub[b] * (w - \frac{w}{2})$$$.
- Greedily calculate the answer.
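The steps above can be sketched roughly as follows (minMoves and the edge-list format are my own illustrative choices, not the commenter's code; this assumes vertex 1 is the root):

```cpp
#include <array>
#include <cassert>
#include <functional>
#include <queue>
#include <utility>
#include <vector>

// Greedy for E1: an edge's gain is (w - w/2) * (leaves below it);
// repeatedly apply the single move with the largest gain.
long long minMoves(int n, long long S,
                   const std::vector<std::array<long long, 3>>& edges) {
    std::vector<std::vector<std::pair<int, long long>>> adj(n + 1);
    for (const auto& e : edges) {
        adj[e[0]].push_back({(int)e[1], e[2]});
        adj[e[1]].push_back({(int)e[0], e[2]});
    }
    std::vector<long long> leaves(n + 1, 0);
    long long sum = 0;
    std::priority_queue<std::array<long long, 3>> pq; // {gain, w, leaf count}
    std::function<void(int, int)> dfs = [&](int v, int parent) {
        bool isLeaf = true;
        for (auto [to, w] : adj[v]) {
            if (to == parent) continue;
            isLeaf = false;
            dfs(to, v);
            leaves[v] += leaves[to];
            sum += w * leaves[to];
            pq.push({(w - w / 2) * leaves[to], w, leaves[to]});
        }
        if (isLeaf) leaves[v] = 1;
    };
    dfs(1, 0);
    long long moves = 0;
    while (sum > S) {               // keep applying the best move
        auto [gain, w, cnt] = pq.top();
        pq.pop();
        sum -= gain;
        w /= 2;
        pq.push({(w - w / 2) * cnt, w, cnt});
        ++moves;
    }
    return moves;
}
```

On the second sample tree from the discussion above (edges 1-3:100, 2-3:123, 1-4:10, 4-5:55, S = 50) this returns 8.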
I did the same thing, yet I got tle.
Then the issue must be in implementation, it gave me an AC.
I implemented my solution using a priority queue like you said and it was accepted. However, during the contest I used a list for all the possible moves and sorted it once and then did greedy, but I got TLE. Any reason for why adding the edges into a priority queue and then popping is significantly faster than adding to a list and sorting at the end? If I'm not mistaken both should be O(n*log(n)) time complexity.
I'm also using Python which could slow down my times, but priority queue implementation was still much faster...
what was your approach for D?
you can solve it using std::set
Binary search! If a partition into k subsequences exists, then a partition into k+1 subsequences also exists; this was the key intuition behind this problem.
Can be solved in O(n) by greedily placing each digit.
Yeah, it can be solved in linear time, thanks.
How did you check for each k ?
Take an array b[k] of the current size k; visit each index of the string and see whether the current known subsequences end with '0' or '1'. If the current character is '0' and the current subsequence i also ends with '0', then you have to use a new subsequence. Iterate over the whole sequence this way, and at each step count how many subsequences have been formed so far. This count always comes out the same, so there is no need for binary search; thus it is solvable in linear time (read the comments above).
Can You share your code ?
Approach 1: I think this can be hacked. 89017500
Approach 2: This is fully correct: 89059363
Approach 3: This is an optimisation of approach 1, but it is wrong and I don't know why: 89053986
Maintain two sets v[0] and v[1] (initially both empty). v[x] denotes the set of all those subsequence ids for which the next expected number is x. Iterate through the string. Let's say the current bit is 0; then if v[0] is non-empty, take out one of the elements of v[0] and put it into v[1]. If v[0] is empty, create a new subsequence and insert it into v[1].
Time complexity: O(n) using vector.
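A minimal sketch of this idea (splitAlternating is a made-up name; it returns k and a 1-based subsequence id per character):

```cpp
#include <cassert>
#include <string>
#include <utility>
#include <vector>

// Splits a binary string into the minimum number of subsequences of the
// form 0101... or 1010...; returns {k, subsequence id of each character}.
std::pair<int, std::vector<int>> splitAlternating(const std::string& s) {
    std::vector<int> v[2];              // v[x]: ids expecting bit x next
    std::vector<int> id(s.size());
    int k = 0;
    for (std::size_t i = 0; i < s.size(); ++i) {
        int b = s[i] - '0';
        if (v[b].empty()) {
            id[i] = ++k;                // start a new subsequence
        } else {
            id[i] = v[b].back();        // extend an existing one
            v[b].pop_back();
        }
        v[1 - b].push_back(id[i]);      // it now expects the other bit
    }
    return {k, id};
}
```

For instance, "0011" needs 2 subsequences ("01" and "01"), while "1111" needs 4.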
At any point you should know whether a particular subsequence has 0 or 1 as its last element. This can be done with the help of sets: just make 2 sets, one for 0 and one for 1. Then, when iterating through the string, if the element we need to work on is 0, check if there is any subsequence with last element 1. If so, we can insert 0 into that subsequence and then update our sets of 0 and 1. If no such subsequence exists, then take the maximum subsequence id we have right now, add 1 to it, and update our sets once again.
this is O(n) solution https://codeforces.me/contest/1399/submission/89002346
looks pretty cool can you explain the logic?
I used 2 std::queues to store the subsequences that currently end with 0 or 1. Iterate over the string: if it's a 0 and the queue of subsequences ending in 1 is empty, then you need a new subsequence, so assign this digit to the new subsequence number num and add num to the queue of zeros. If the queue of ones is not empty, then you can add this digit to any subsequence stored in the queue of ones (use q.front()), then pop that sequence number from the ones queue and add it to the zeros queue, because you just added a 0. Do the same in case you encounter a 1. Always save the answer for the i-th character at the i-th position of the answer array, which is num.
My Video Solution for Problem C Problem D
Editorial please!!!!
Can someone help me, please? I can't understand why I get WA on test 2 on D.
89051411
Your code provides WA for this case:
011011
Thanks
try this test, you will get where you are going wrong
Thanks
E2 and F are nice questions, looking forward to solving them in practice.
Nice English, dude.
nice bullying dude
Nice handle, dude.
Probably I haven't managed to understand the problem statement, but why is the solution for the second graph in problem E1's example 8? I mean, we have 1-3: 100, 2-3: 123, 1-4: 10 and 4-5: 55. If we make 100->25 (2 steps), 123->15 (3 steps) and 55->27 (1 step), then 1-4-5 is 37 and 1-3-2 is 40, which is below the s = 50 limit, and it's 6 steps altogether, which is less than the claimed solution of 8. Am I missing something here?
The sum of the weights of all paths from the root to the leaves should be at most $$$S$$$.
Ah, not each of them root to leaf path individually but the sum of them all. Ah why can't I read the problem statement... Thanks for the clarification!
For E1, I got TLE. The idea was to find how many leaves each node has and do a weighted sum accounting for those leaves (so real_weight[vertex] = weight[edge_to_vertex] * number_of_leaves[vertex]). In the end, I stored the elements in a set and kept reducing the largest element by a factor of 2.
Could someone suggest what could be causing the timeout issues? Link
You're using memset t times which will tle if t=2 * 10^4.
Thanks I’ll try again in a bit.
Did anyone get WA on test 2 on problem D ?
edit — Shit I forgot an 'else' keyword :'D
Try this test.
How to solve E1 ?
I used priority queue but it gave WA in test set 2. If possible, you may look into my code. Thank you in advance.
(truncated Java code paste)
I made the same mistake, at this line:
PriorityQueue<long[]> pq = new PriorityQueue<>((o1, o2) -> (int) (o2[1] * o2[0] - o1[1] * o1[0]));
Your comparator should maximize the amount reduced per operation, (weight of edge + 1) / 2 * number of leaves in subtree, not the amount the edge is currently contributing to the sum. There's a difference between the two. My Submission
Thanks a lot. It helped.
I solved E1 using a priority queue with a custom comparator function. Can I use a similar approach with 2 priority queues (one for each cost type)? Please help a bit!
Nice problems! Thank you for great round! Waiting for editorial :)!
Hello! Firstly, I want to say that in my opinion this was one of the most balanced rounds organized recently, and it had a really good problemset, so I enjoyed it a lot. Nice work! I also wanted to ask the community whether this extension for rating prediction works well and determines the rating change after a round correctly: https://cf-predictor-frontend.herokuapp.com/.
Yeah! the rating change will be more or less the same as shown by that predictor. Kudos! you did a great job today!
Thanks a lot!
Do we get the editorial after the hacking phase?
My greedy implementation of D was too messy and I couldn't even pass the sample test cases. Is there a cleaner implementation of D?
https://github.com/actium/cf/blob/master/1300/90/1399d.cpp
Thank you
I used two queues, very short implementation
Thanks. Seems much better
Kudos to BledDest, Gassa and MikeMirzayanov for excellent round coordination!
And as usual Python screws me over with TLE in E1 after a textbook solution.
I approached problem D using binary search and set data structure but it TLEd unfortunately. I just wanted to know if my implementation was wrong or it was just too slow to pass the time limit. I kept two different sets for indices of ones and zeros. I take the smallest index from one set and then binary search the next index from the other set until there are no more elements that can be assigned to the same subsequence. Here is my submission link.
Here is your accepted code 89057458
I am not sure, but I think for a set, lower_bound(a.begin(), a.end(), x) works in linear time. You need to use a.lower_bound(x).
Thanks. Did the same XD
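To illustrate the difference between the two lower_bound forms mentioned above (firstAtLeast is a hypothetical helper):

```cpp
#include <cassert>
#include <set>

// std::set iterators are only bidirectional, so the free-function
// std::lower_bound(s.begin(), s.end(), x) advances element by element
// and costs O(n); the member s.lower_bound(x) descends the balanced
// tree in O(log n).
int firstAtLeast(const std::set<int>& s, int x) {
    auto it = s.lower_bound(x);   // member version: O(log n)
    return it == s.end() ? -1 : *it;
}
```

Both forms return the same iterator; only the running time differs, which is exactly why the TLE disappears after switching to the member function.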
Why is regular codeforces.com not opening from my laptop? I haven't been able to open the site for the last four hours. I even had to do the contest on m1.codeforces.com and am writing this comment from my phone. It still isn't working. Is it only me, or is it anyone else?
Very nice problemset
Why didn't DSU work in problem F?
My submission 89050221
Idk what your approach is, but it's supposed to be pure DP. Maybe you are allowing segments to intersect inside one?
I am finding all invalid pairs and adding an edge between them... then I am finding the maximum independent set.
How to solve E2 ?
I can try to explain and prove my solution ^_^
Let's define the impact of an edge as $$$(w-\left\lfloor\frac{w}{2}\right\rfloor)*l$$$, where $$$w$$$ is the current weight of that edge, and $$$l$$$ is the number of leaves in the subtree of that edge. So $$$w$$$ occurs in the sum $$$l$$$ times.
Now let's find out, after performing, say, $$$i$$$ moves on edges with $$$c==1$$$ and $$$c==2$$$, the maximum sum of impact we can obtain. The resultant sum obviously decreases by the sum of impact.
Now, we do DP: $$$dp[i].first=max(dp[i-1].first+b1[dp[i-1].second.first+1], dp[i-2].first+b2[dp[i-2].second.second+1])$$$,
where $$$dp[i]=$$$ {maximum sum of impact for cost $$$i$$$, {number of moves on $$$c==1$$$ type edges, number of moves on $$$c==2$$$ type edges}},
$$$b1[j]=$$$ maximum sum of impact with $$$j$$$ moves of $$$c==1$$$ type. Similar for $$$b2[j]$$$.
Now an insight: the number of moves is bounded above by $$$n\log(nW_{max})\leq50n$$$ for the given constraints.
So now we can just do a linear search and output the answer as soon as the sum minus $$$dp[i].first$$$ becomes less than or equal to $$$S$$$.
Even though the DP relation is intuitive, here is the proof:
Obviously, the $$$j^{th}$$$ move can either be $$$1$$$ or $$$2$$$ type, so we max over those. Now, lets say for $$$1$$$ type, we do not take the $$$(dp[j-1].second.first+1)^{th}$$$ $$$1$$$-type move:
UPD: A part of the previous proof was wrong. Here is the complete correct proof for the DP recurrence.
Structure theorem: Let number of $$$1$$$ and $$$2$$$-type moves $$$(n_1[i], n_2[i])$$$. Then for adjacent elements
$$$|n_1[i]-n_1[i-1]|\leq1$$$
$$$0\leq n_2[i]-n_2[i-1]\leq1$$$
(can be proved by easy induction)
From here, we can easily compare the optimal solution and the solution of our DP+Greedy.
This proof took me very long (almost all night) so please upvote if you found it useful ^_^ 89052626
Hi, your explanation was really clear. I came up with a solution inspired by yours: I used your idea to calculate b1[] and b2[], where b1[i] is the maximum impact while using i operations of cost 1 (and cost 2 for b2[i]). But instead of DP, I binary searched for the number of operations, which is < nlog(nWmax) as you said. For a fixed number of operations, say x, there are x+1 ways to construct a solution: take i + (x-1)mod 2 from cost 1 and (x-i)/2 from cost 2 with 0 <= i <= x. For each of these possibilities, we can calculate the maximum impact, which is y = b1[i + (x-1)mod2] + b2[(x-i)/2]; if Sum_impact - y <= s for any i, then x is valid. It works in O(nlog(nWmax)*log(nlog(nWmax))). Hope I explained it well.
Thanks.
Actually, what you came up with is coincidentally also the editorial's solution. Also, I have not come across any other DP solution so I hope it passes systests :P (although the proof seems solid).
Can someone tell what's wrong in my solution for problem C?
My submission:89060049
Your code gives 2, but it should give 3.
How to solve F ?
Will there be a tutorial for this contest like the last one?
Yes
Can E2 be solved using priority_queue (maintaining 2 priority queues, one for each cost)? I solved E1 using a priority queue. Here is the E1 submission. If anyone has done E2 using a priority queue, please let me know. Other approaches are also welcome.
You do the same solution as E1, except with two different priority queues, one for each cost, storing the amounts in a vector after x turns, then just do two pointers.
My messy solution for E2 using just greedy approach with 2 priority queues 89027488. Not sure if it really works though.
Can you tell me what I have done wrong? Here is my submission; it is giving WA on the 21st test case.
My main idea is to spend 2 coins in every iteration, so that either priority queue may be used; at the end, break if one 1-coin move alone can bring the total <= S, or if one 1-coin and one 2-coin move together can bring the sum <= S, given that no two 1-coin moves can do so. Shouldn't this idea work?
E1 and E2 can be solved using the same approach, with some extra work in E2. I didn't use priority queue though. In E1, I sorted all the contribution of all edges. Now the problem is what length of prefix of this sorted list you need to take to make the total weight $$$\leq S$$$.
In E2, I made two different sorted list, each for one of the two cost types. Now for each prefix length $$$k$$$ of the first list, find out the cost of taking $$$k$$$ moves from this list and remaining from the other list. The minimum cost is the answer.
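The "prefix of one list plus remainder from the other" step can be sketched as follows, assuming the two gain lists have already been extracted (minCost is an illustrative name; real E2 code must also expand each edge into its successive halving gains, which is valid because those gains are decreasing, so any prefix of the globally sorted list is consistent):

```cpp
#include <algorithm>
#include <cassert>
#include <vector>
typedef long long ll;

// g1, g2: gains of the individual cost-1 and cost-2 moves;
// need: how much the total weight must decrease.
// Returns the minimum total cost, or -1 if impossible.
ll minCost(std::vector<ll> g1, std::vector<ll> g2, ll need) {
    std::sort(g1.rbegin(), g1.rend());
    std::sort(g2.rbegin(), g2.rend());
    std::vector<ll> p2(g2.size() + 1, 0);        // prefix sums of g2
    for (std::size_t i = 0; i < g2.size(); ++i) p2[i + 1] = p2[i] + g2[i];
    ll best = -1, got = 0;
    for (std::size_t k1 = 0; k1 <= g1.size(); ++k1) {
        if (k1 > 0) got += g1[k1 - 1];           // best k1 cost-1 moves
        ll rest = need - got;
        // smallest k2 with p2[k2] >= rest (p2 is non-decreasing)
        auto it = std::lower_bound(p2.begin(), p2.end(), std::max(rest, 0LL));
        if (it != p2.end()) {
            ll cost = (ll)k1 + 2 * (ll)(it - p2.begin());
            if (best == -1 || cost < best) best = cost;
        }
    }
    return best;
}
```

For example, with cost-1 gains {5, 3}, a single cost-2 gain {10}, and a required reduction of 7, taking the one cost-2 move (total cost 2) beats two cost-1 moves (total cost 2 as well, tied here) and one-of-each (cost 3).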
Please tell me what's wrong in my E1 solution
My solution for F.
Assume there is an interval $$$[0, inf]$$$ at index $$$0$$$, which we can select into the subset always. Now we can express a valid solution as a rooted tree, where the nodes represent the selected intervals (the root will be interval $$$0$$$). An interval $$$p$$$ will be the parent of an interval $$$u$$$ if $$$u$$$ is inside $$$p$$$ and there is no selected interval $$$q$$$ such that $$$u$$$ is inside $$$q$$$ and $$$q$$$ is inside $$$p$$$. In a valid subtree, the children of interval $$$u$$$ will be non-intersecting. Suppose, $$$f(u)$$$ is the maximum number edges in a valid subtree of interval $$$u$$$. Then the answer to the problem will be $$$f(0)$$$. We can solve the problem with DP.
Let's compute $$$f(u)$$$. Let $$$C$$$ be the list of intervals that are inside $$$u$$$, sorted by the right bound of the intervals. A subsequence of this list will be the children of $$$u$$$ in the tree. If the rightmost child is the $$$i$$$-th interval in $$$C$$$, the answer will be $$$g(i) = 1+f(C[i])+max_{r[C[j]] < l[C[i]]}g(j)$$$. If we loop over all such $$$j$$$, the solution will be $$$O(n^3)$$$. We can optimize it to $$$O(n^2\log(n))$$$ by noticing that all such $$$j$$$ form a prefix of $$$C$$$. We can keep the prefix max of $$$g$$$ and find the rightmost possible $$$j$$$ for an $$$i$$$ by asking a lower bound of $$$l[C[i]]$$$ on $$$C$$$.
89025065
Can someone explain the approach for Problem C-Boats Competition?
My approach was as follows: the possible values of s are 2 to 100; for each possible value of s, I tried to find the possible pair count based on the given weights, and finally took the maximum.
https://codeforces.me/contest/1399/submission/89041975 Hope it helps!
Same idea but with multiset solution
My video solutions to all problems + screencast (where I placed 17th? place). Sorry for uploading it that late (currently on a slow internet connection).
Enjoy watching.
https://youtu.be/yWD-Hcl_0Sk
UPD: After system testing, I'm 15th.
Why is this hack giving "invalid input" for problem C?
My C++ code works fine in my local IDE, but on Codeforces it shows a runtime error on test case 1. Can anybody help me figure out where the problem is and how to fix the "uninitialized value usage" error? Here is my submission link: https://codeforces.me/contest/1399/submission/89073584
It must be ar[some const]. So change it to ar[max value of n].
Thanks for the great round!
I will be expert after this round. Thanks for arranging such a round!
When will the results be out? I'm a beginner :)
I don't know, but it will be published soon.
You can use CF-Predictor to see how many points you are gaining or losing. You can find it there.
congrats brother <3
Why is my code giving TLE for E1? Can anyone help, please? 89089825
Check for this case:
5 100
1 2 10
2 3 10
3 4 10
3 5 10
Check whether the frequency of nodes visited is correct in your implementation (dfs1). I think the fault is there.
Maybe because of map. You can use arrays instead of map.
What is the time complexity of this solution? Can anyone help? For D: it gave TLE on test 6.
Same idea for lower_bound
When will the ratings update? I have been waiting for an hour. :(
UPD: Sorry for my impatience, and thanks for this gr8 contest)
Can someone look at my solution for problem E1? The idea is exactly the same as mentioned in the editorial, but it's getting a wrong answer on test case 5. I couldn't figure it out myself.
https://codeforces.me/contest/1399/submission/89098897
My solution for E1 failed system tests on test 21 (WA). I would really appreciate it if someone could help with debugging. TIA
Submission link : https://codeforces.me/contest/1399/submission/89036745
Can someone help me out with my submission for the E1 problem 89161963? It's producing wrong output for test case 2, and apparently there is some problem with initialSiz (i.e. the variable for deciding if zero moves would suffice).