Tutorial (author: Hori), Code: 288397475
Tutorial (author: willy108), Code: 288397529
Tutorial (author: Hori), Code: 288397556
Tutorial, Code: 288397595
Tutorial (author: lunchbox), Code: 288397611
Tutorial (author: turtletortles), Code: 288397640
Tutorial (author: sum), Code: 288397661
Tutorial (author: Benq), Code: 288397685
Tutorial (author: Hori), Code: 288397713
Okay, so D is done by ChatGPT but also oursaco. oursaco, if you don't mind, can you please tell us a little about how you contributed to the creation of that problem? Did you just write the prompt, or did the ChatGPT output have to be revised?
I asked ChatGPT to make a problem; initially it was solving the problem for a single array with no i < j constraint (this was an existing div2A). I thought the i < j constraint would make it more interesting and solved for it. Then the solve-for-every-prefix part was added. Also made sure to plug it back into o1-preview to make sure it couldn't solve it XD
Code is not accessible.
Fixed, thanks for letting me know
Can't see the tutorial of D, but is it possible to solve D using knapsack?
I didn't solve it, but I think D is to be solved using the observation: if $$$a_i=2^{p_i} \cdot q_i$$$, there should be $$$p_i$$$ operations that greedily decrease $$$A_i$$$ and greedily increase the member of the prefix with greater or equal index and maximum $$$q$$$.
With this, we can compute the optimal sum for each prefix separately in quadratic time. Or, we can compute each prefix from the previous prefix in linear time (somewhat like DP).
Essentially, we process the $$$i^{\text{th}}$$$ prefix as adding $$$A_i$$$ to an array $$$p$$$. By adding $$$A_i$$$, we offer a new $$$q$$$ to all of the elements that precede it. We obviously want all of $$$A_1, \cdots, A_{i-1}$$$ that are using their operations to increase an element with a lesser $$$q$$$ to instead increase $$$A_i$$$. All of the elements that precede the nearest element with a greater $$$q$$$ will not change, but those that follow it will now use their operations to increase $$$A_i$$$. We can find the nearest element with a greater $$$q$$$ using monotonic stacks, which are well explained by the USACO Guide or CPH. With prefix sums, we can find the factor by which $$$A_i$$$ increases, and monotonic stacks have linear time complexity, so the solution runs in $$$O(n)$$$.
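Since a few people in this thread asked how to turn the stack idea into code, here is a minimal C++ sketch of the approach described above. It assumes my reading of the problem (for each prefix, factors of two may only be moved from earlier to later positions to maximize the sum, reported modulo 10^9+7), so treat it as a hedged illustration rather than the reference solution: each stack entry is (odd part, number of 2-factors it currently holds), and a new element absorbs the twos of every entry whose odd part is smaller than the new element's current value.

```cpp
#include <bits/stdc++.h>
using namespace std;

const long long MOD = 1e9 + 7;

long long pw2(long long e) {                 // 2^e mod MOD
    long long r = 1, b = 2;
    for (; e; e >>= 1, b = b * b % MOD) if (e & 1) r = r * b % MOD;
    return r;
}

int main() {
    int t; scanf("%d", &t);
    while (t--) {
        int n; scanf("%d", &n);
        // stack entries: (odd part, number of 2-factors this element currently holds)
        vector<pair<long long, long long>> st;
        long long sum = 0;                   // optimal sum of the current prefix, mod MOD
        for (int i = 0; i < n; i++) {
            long long a; scanf("%lld", &a);
            long long q = a, e = 0;
            while (q % 2 == 0) q /= 2, e++;  // a = q * 2^e, q odd
            long long shift = e;             // 2-factors the new element holds so far
            while (!st.empty()) {
                auto [oq, oc] = st.back();
                // current value of the new element; odd parts are < 2^30, so cap at 31 shifts
                long long val = (shift >= 31) ? LLONG_MAX : (q << shift);
                if (val <= oq) break;        // taking the top's twos would not help
                st.pop_back();
                // the popped element keeps only its odd part from now on
                sum = (sum - oq * pw2(oc) % MOD + oq + 2 * MOD) % MOD;
                shift += oc;                 // the new element absorbs those twos
            }
            st.push_back({q, shift});
            sum = (sum + q % MOD * pw2(shift)) % MOD;
            printf("%lld ", sum);            // answer for the prefix of length i+1
        }
        printf("\n");
    }
}
```

The comparison can be capped at 31 shifts because every odd part is below $$$2^{30}$$$, which is also where the "31" mentioned further down this thread comes from.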
I don't see the intuition for Knapsack DP as the problem isn't looking for a subset, but maybe you're on to something.
Hii
I used this approach first; am I wrong because it leads to int overflow and turns negative?
My reasoning was that we can find the maximum value of a[i]*pow(2, 2supto[i]), reduce all the numbers in the prefix before it to their maximum odd part, and leave the suffix unchanged.
https://codeforces.me/contest/2035/submission/288358437
C is the hardest div2C in history. Even harder than D2 from the last div2 round (rated 2187 on CList).
what is CList?
https://clist.by/problems/
I took a look, and it doesn't seem that difficult; it only has a score of 1322.
You, sir, have shared a resource of major importance! I will definitely use it in the future!
I found it easy.
What? It shows only 1322 for me. Maybe it wasn't properly updated when you saw it?
He means the last div2 D2 was 2187
Oh, I get it now. I thought they meant it is rated higher than that 2187. Turns out that they 'felt' it's harder than that.
Yo, I've just completed a tutorial for problem D on HackMD: link
Sorry for not publishing the tutorial directly on Codeforces, I prefer using HackMD :)
First time getting upvoted, thanks a lot !
C odd case: since n%2==1, l will always be 1, so it is easier to just set the last four elements to 1, 3, n-1, n.
Please tell me, in A, why they are not using the formula which is given. Also, why is the number of moves of the first element counted as (m+1)? When I find the distance using the formula, I get (m).
Can someone share their thought process for D, i.e., how and why it works?
You have to collect all the 2's (in terms of powers) and give them to the best successor possible, then add the sum of the remaining odd parts which are not considered.
Ex 1: 1 6 5
Here you have to give all the powers of 2 to 5, which is better than leaving them on 3.
Ex 2: 4 1 3
Same as above but the layout is different: first you accumulate at 1, then you accumulate at 3.
Ex 3: 4 7 2 2 2
Here, for the first 2 elements, 4 is accumulated into 7.
For 3 elements, since there is no benefit in sending 4's powers to 2 over 7, we leave it unchanged.
For 4 elements, first we accumulate the previous 2, and then compare 4 with 7; since 7 is bigger, the powers remain unchanged.
For 5 elements, we accumulate the past two 2's to get 8, which is in turn greater than 7, so we send the 4's powers to 8, making it 32.
Thus we store all the accumulated indices and powers in a stack, and all the odd parts remaining after removing powers of 2 in a separate variable (unaccumulated_odds). Whenever we get a new element, we try to pop elements from the stack and accumulate their values at the new element, the same as shown in example 3.
Think of it like this: among a given i and the previous 31 even numbers, you will have a number (with or without shifting the 2's onto it) greater than 1e9, since every a_i is bounded by 1e9 while 31 factors of two already exceed 1e9. All other even numbers before that will lose their 2's to the largest element.
Can E be solved by simulated annealing? I submitted several times, but no luck: 288417811. This is the first time I wrote simulated annealing, so it may come with some mistakes. So I wonder whether it can be solved by such an approach or not.
I also tried but failed ;( Multiple test cases aren't so easy for SA to pass.
No way in hell I can ever "notice" what the editorial tells us to notice in problem D.
Solved it by thinking about where to utilize the 2s between the last number we gave 2s to and the current number i. If we can use these 2s to make a[i] greater than the last number we gave all the 2s to, then we should also give all the twos we gave that previous number to this a[i]. Otherwise we reuse the answer from the last number we gave all the twos to and give all the twos in between to a[i]. To maintain the last number we gave all the 2s to, we can use a monotonic stack. Submission: https://codeforces.me/contest/2035/submission/288418798
Actually, I did notice it after thinking about it for a while. Not bad. Wouldn't have ever come up with it myself though. Thanks for the editorial.
you can solve E using ternary search too.
talk is cheap, share code.
288768348
Runs in $$$C \log z$$$.
$$$C = 100$$$ works.
Idea: if you replace the ceiling division in the formula with normal division, the function simplifies to pA + t/A, where p and t are constants. This is a valley-shaped (unimodal) function, and the valley can be found with ternary search. Then just search a bit to either side of that valley to account for any inaccuracies caused by the division change.
I can't really prove the horizontal search; if anyone could share a counter case, that would be cool.
Also side note: I actually never had all the details ironed out before I started coding it, so ig papa-ka-para was right. upvoted :)
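In case it helps, here is a small self-contained sketch of the pattern being described: integer ternary search on a valley-shaped function, followed by a sweep of a window of width C around the found point. The constants p, t, the search range, and the cost() formula are hypothetical stand-ins rather than the actual formula from E; only the search structure matters.

```cpp
#include <bits/stdc++.h>
using namespace std;

int main() {
    // Hypothetical stand-ins: the real p, t and cost come from the problem;
    // cost(A) = p*A + ceil(t/A) is just a concrete valley-shaped function.
    long long p = 3, t = 1'000'000;
    auto cost = [&](long long A) { return p * A + (t + A - 1) / A; };

    long long lo = 1, hi = 2'000'000;            // search range for A
    while (hi - lo > 2) {                        // integer ternary search on the valley
        long long m1 = lo + (hi - lo) / 3;
        long long m2 = hi - (hi - lo) / 3;
        if (cost(m1) < cost(m2)) hi = m2; else lo = m1;
    }
    // Sweep a window around the found point to absorb the ceiling-division wobble.
    const long long C = 100;
    long long best = LLONG_MAX;
    for (long long A = max(1LL, lo - C); A <= lo + C; A++) best = min(best, cost(A));
    printf("%lld\n", best);
}
```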
Nice one. Although I tried a lot during the contest to get ternary search to pass, it didn't work.
Lol no. You can maybe pass the current testset if you are lucky but it's not "solving".
Bro C was just too difficult for me.
For C, we can also approach it greedily. Observe that if n is odd, then the last operation is an AND, so the result can be at most the last element; we choose the last element to be n and the second-to-last to be n-1, as (n & (n-1)) = (n-1). But what if we try to turn the first bit of (n-1) on? Then we can make it n. How to do it? Observe that if we place 3, 1 before it, we get the first bit ON, which is carried forward to get the maximum value of n.
Also, in the case where n is not a power of two, n would have a bit which isn't present in the sequence, so we can greedily place it at the end and have a permutation such as 1, 2, 3, ..., n. The rest is nicely explained in the editorial, thanks for it! Submission link: https://pastebin.com/x0gwHdeL
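The odd-n construction above is easy to sanity-check. Below is a small self-contained C++ snippet that builds the permutation (arbitrary filler, then 1, 3, n-1, n) and simulates the process. I'm assuming the rule is k &= p_i for odd i and k |= p_i for even i, starting from k = 0, which is my reading of the statement, so treat this as a hedged check rather than a reference solution.

```cpp
#include <bits/stdc++.h>
using namespace std;

// Assumed process: k = 0; for i = 1..n, k &= p[i] if i is odd, k |= p[i] if i is even
// (my reading of the statement -- double-check against the actual problem).
int simulate(const vector<int>& p) {
    int k = 0;
    for (int i = 1; i <= (int)p.size(); i++)
        k = (i % 2 ? k & p[i - 1] : k | p[i - 1]);
    return k;
}

int main() {
    // Odd-n construction from the comment above: filler, then 1, 3, n-1, n at the end.
    for (int n = 5; n <= 15; n += 2) {
        vector<int> p;
        for (int v = 2; v <= n - 2; v++)
            if (v != 3) p.push_back(v);           // filler in arbitrary order
        p.push_back(1); p.push_back(3); p.push_back(n - 1); p.push_back(n);
        printf("n=%d -> k=%d\n", n, simulate(p)); // expect k == n
    }
}
```

For each odd n it should print k equal to n: the OR with 1 sets the lowest bit, the AND with 3 keeps it, the OR with n-1 then gives at least n, and the final AND with n leaves exactly n.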
Am I the only one who found B to be harder than C and D? (and D easier than C)
In D, does the solution described for the entire array ("We iterate backwards on the array [...] is the largest, we do nothing") not work on
11 1 6 9 4 7 4 4 10 3 2 3
in sample? I get 1313 when it should be 1568
When checking if an element a is the largest, you need to check it after removing its powers of 2. In the example, when it gets to the number 10, the largest is already 2*3=6, so it turns the largest into 10, even though if you were to add the 2 from 10 to 6, you would get 12, i.e. a bigger result. So what it should be doing instead is checking whether 5 is larger than 6, and since it's not, it should then multiply 6 by 2.
If it's still not clear, you can think of the number of times a number can be divided by 2 as its potential. This potential can go either to the base of the current number (the current number divided out by its potential) or to the largest number calculated before. So, returning to the example, the 8th element has potential 1 and base 5. If I give the current number its potential, I end up with 10, but if I give it to the largest number (which equals 6), I end up with 12.
One more way to solve F (same observation and logic, but a different implementation):
Do a normal binary search on the answer, but this time, instead of checking only mid, check every value from mid-2*n+1 to mid; if even one of them turns out to be true (say for num), set hi = num-1, else lo = mid+1.
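For clarity, here is a bare-bones skeleton of that windowed binary search; check(), the bounds LO/HI, and n are hypothetical placeholders, and only the loop structure is meant to mirror the description above.

```cpp
#include <bits/stdc++.h>
using namespace std;

int main() {
    // Placeholders: the real check(), bounds, and n come from the problem.
    long long n = 100, LO = 0, HI = 1'000'000'000;
    auto check = [&](long long v) { return v >= 123456; };   // dummy predicate

    long long lo = LO, hi = HI, ans = -1;
    while (lo <= hi) {
        long long mid = lo + (hi - lo) / 2;
        bool found = false;
        // test the whole window [mid - 2n + 1, mid] instead of just mid
        for (long long v = max(LO, mid - 2 * n + 1); v <= mid && !found; v++) {
            if (check(v)) { ans = v; hi = v - 1; found = true; }
        }
        if (!found) lo = mid + 1;
    }
    printf("%lld\n", ans);
}
```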
I came up with the same thing but got TLE.
https://codeforces.me/contest/2035/submission/288440877
Maybe it's a lucky pass; they should have given n <= 1000.
For the D solution, why 31 factors of 2?
Can someone help me prove my solution for C? I brute-forced some cases of $$$ 2, 1, 3, 4, \ldots, n $$$ and noted that we always get the required $$$ k $$$ somewhere. Based on this, my solution is just $$$ x + 1, x + 2, x + 3, \ldots, n, 2, 1, 3, \ldots, x $$$ for some x such that the resulting $$$ k $$$ for $$$ 2, 1, 3, \ldots, x $$$ is maximum and $$$ n - x $$$ is even. I cannot formally prove why the sequence $$$ 2, 1, 3, 4, \ldots, x $$$ always gives the best k.
Submission
I just don't know why $$$O(n^2\log{a_i})$$$ does not fit in 4 seconds in problem F.
UPD: It seems to be my problem. I was submitting with G++ 17, which is based on a 32-bit architecture, instead of G++ 20.
We knew the TL was tight for some people but were also aware of a very fast N^3 solution and wanted to cut it.
In such cases, it might be better to modify the constraints a bit.
For example, you could use 0 <= a_i <= 10^5 instead of 0 <= a_i <= 10^9.
I guess that would have made sense. However, there were a lot of optimizations where any one of them could make it pass (although even unoptimized solutions passed comfortably for many people).
I am very bad at solving problems like B. Can anyone tell me which topics I have to learn to solve this kind of problem? Thanks in advance.
I am confused: in problem D, we can have $$$a_i$$$ > $$$a_j$$$, yet it is more optimal to divide $$$a_i$$$ by 2 and multiply $$$a_j$$$ by 2.
example: 20 and 9
20 + 9 = 29
but we can do: 20 / 2 / 2 = 5
and 9 × 2 × 2 = 36
and the sum becomes 41 > 29
What am I missing here?
Consider this case in a "distribute the $$$2$$$ factors" manner, now we have $$$20=5\times 2^2$$$, and it's pretty clear that giving the $$$2$$$ factors to $$$9$$$ is better than to $$$5$$$.
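A small derivation that may help (not from the editorial, just a sanity check of the claim above): suppose an element is $$$x = q_x \cdot 2^{p}$$$ and some later element currently has value $$$y$$$. Keeping the factors on $$$x$$$ contributes $$$q_x \cdot 2^{p} + y$$$ to the sum, while moving all of them to $$$y$$$ contributes $$$q_x + y \cdot 2^{p}$$$. The difference is $$$(2^{p}-1)(y - q_x)$$$, so moving is better exactly when $$$y > q_x$$$, i.e. when $$$y$$$ beats the odd part of $$$x$$$, no matter how large $$$x$$$ itself is. In the $$$20, 9$$$ example, $$$q_x = 5 < 9$$$, so both factors of $$$2$$$ should go to $$$9$$$.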
What do you mean by "distribute the 2 factors"? Do you mean reducing all numbers to their odd form (dividing them all by 2 until they become odd)? Edit: yeah, I got it now. It wasn't mentioned in the editorial though. Thanks!
However, E can be solved in $$$O(z^{3/4})$$$, just by using some optimizations on brute force. My Code
Please explain the logic you used for optimization
Each time, increase $$$d$$$ by $$$k$$$ and then use a second operation; this ends after at most $$$O(\sqrt{\dfrac zk})$$$ rounds.
If you choose to increase $$$d$$$ by $$$i$$$ and always use the second operation from then on, the additional cost is $$$ix+\lceil\dfrac z{d+i}\rceil y$$$. You can see that there are at most $$$O(\sqrt z)$$$ different values of $$$\lceil\dfrac z{d+i}\rceil$$$, and it is always better to use the smallest $$$i$$$ among those with the same $$$\lceil\dfrac z{d+i}\rceil$$$. So the total is $$$O(\sqrt{\dfrac zk}\min(\sqrt z,k))$$$.
If $$$\sqrt z\le k$$$ , then $$$O(\sqrt{\dfrac zk}\cdot\sqrt z)=O(\dfrac z{\sqrt k})\le O(z^{\frac 34})$$$ ;
If $$$\sqrt z>k$$$ ,then $$$O(\sqrt{\dfrac zk}\cdot k)=O(\sqrt{zk})<O(z^{\frac 34})$$$ .
Hori, in the editorial of problem E you have mentioned this statement: a * b >= 2 * z implies that min(a,b) <= sqrt(2*z). Why is that? How did we reach there? Just to plug in a few values: let's take a = 5, b = 5, and z = 5. Then a * b >= 2 * z holds true, but min(a,b) <= sqrt(2 * z) doesn't hold. Could you please elaborate a little here?
When z=5 and a=5, b=2 satisfies the inequality, so min(a, b) <= sqrt(2*z) is still satisfied. There are definitely other solutions, and b=5 is one, as you correctly point out, but they are simply worse than the solution we just found, as they cost more. (Intuitively, if both a and b were larger than sqrt(2*z), you could shrink one of them while keeping a*b >= 2*z and pay less, so some optimal solution always has min(a,b) <= sqrt(2*z).)
Is system testing done for this round yet?!
long back bruh.
For solution E, I don't understand how we got $$$\dfrac{a\cdot b}{2} \ge z$$$
I think Problem B is very medium, hmmm.
In the G1 editorial, while counting cnt(i,j), it should be the case that j < m < i and not the other way around, right?
Could you please help me: what do I do if it shows "N/A" on the solution code page? Thank you.
Can we solve D using a priority queue? We store pairs of {lowest value a[prev] can get, a[prev]} and check if a[i] > pair.first; then we can perform the operation and pop the pair. We then maintain an answer vector where ans[i] += a[i-1] - pair.second + pair.first + a[i] * (pair.second / pair.first).
I can't understand C's solution. Could someone explain the idea more clearly? It's too abstract for me.
Hori, the code is not available