Hi Codeforces!
I am glad to announce and invite you to Codeforces Round 689 (Div. 2, based on Zed Code Competition), which will be held on Dec/11/2020 17:35 (Moscow time).
We offer you 6 problems taken from the Zed Code Competition 2020, which was held as part of the Adapt by Zed conference powered by EPAM Systems.
This round will be rated for participants with a rating lower than 2100.
- The authors of the round are: Aliaksandr AleXman111 Kryvasheyeu, Aliaksandr sdryapko Drapko, Aliaksei netman Vistiazh and me.
- The round was coordinated by Vladimir vovuh Petrov and Nikolay KAN Kalinin.
- The round was tested by: awoo, mohammedehab2002, thenymphsofdelphi, _overrated_, BRCode, morzer, Retired_cherry, mesanu, SlavicG, Shinchan01, manik.jain, Coder-HVZ, Rox, hloya_ygrt, 244mhq, Wind_Eagle, PuRpLe_FoReVeR, Gassa, namanbansal013, saurabhyadavz and andrew.
- And, of course, thank you very much, Mike MikeMirzayanov Mirzayanov, for creating the Polygon and Codeforces platforms.
Thanks a lot for your contribution to the preparation of the round! Good luck with the competition!
UPD: The scoring distribution is 500 — 1000 — 1250 — 1500 — 2250 — 2750.
Congratulations to the winners of div 2:
as well as div 1 winners:
Thank you all for participating! (editorial)
As a tester, 1 in 7 children does not have enough food to lead a happy and active life, but what does it take to end global hunger? Participate in Codeforces Round #689.
Feel Good to you and your friend as a first time tester Sir !!
It do feel good to me
As a tester, my name is not mentioned in the tester's list. ⊙︿⊙
Edit — Now I'm on it.
People thought you were lying, but in fact you were not. Now all of us should give you an upvote.
Where is the score distribution??
The scoring distribution is 500 — 1000 — 1250 — 1500 — 2250 — 2750.
Yes, right after I wrote a comment about the scoring distribution, it was updated :)
As a tester, I love Zebras!
Same here!! And I can say that after the contest everyone will love zebras.
Zebra themed contest would be nice for loving zebra. Edit: I hate zebras :D
your profile picture is interesting
Hope it doesn't end up like Pony's contest
As a contestant, me too :D
Whose idea? Btw I'm neutral towards zebras, but the photos were good and calming to look at. Thanks
I think it will be a good contest!
Good luck to everyone!
As a tester, wishing you all Happy Rating :)
Isn't rating a zero sum game? Although, I'm not entirely sure what the method to compute rating delta is...
No, it's not. It's a negative sum game.
BRCode, will there be 3b1b-styled tutorials xD?! I'm aware they must be really hard to make, but they're really great. BTW, don't stop making them.
Hi, sorry I was just testing this round — there won't be any video editorials for it.
However I am making video editorials for a contest later this month, so look out for that :p
Can't wait for it ;)
[copyright striked by Digon]
As a plagiarism hater, stop copying my son's comments every now and then
I will be glad to participate and test my knowledge
The unseen bug is the deadliest
no meme for this announcement?
10x deadlier when combined with weak pretests.
Good luck everyone ❤️
As a tester, I want to wish you good luck and high rating!
Unfortunately, I don't see your name in the testers list
Unfortunately, I don't know what you were looking at :)
Unfortunately, then there are four possibilities
1.You're from a different world line and you haven't tested this round in this world line (according to the announcement at least)
2.Your name hasn't been added to the list given in the announcement but you tested this round
3.You never tested this round
4.You are an alt account of one of the testers and mistakenly commented from the wrong account :P.
Change language to Russian)
Yes, I'm sorry, my name is only mentioned in the Russian version.
Alright, I get it, then it's the 2nd possibility, thanks for testing this round :)
sorry, fixed :)
I suspect there's a problem that will require Z-algorithm. ;)
If there is I am going to troll them by solving it with KMP.
Now even if they planned to have such a problem, they will replace it with another one because of this comment!!!
There are five hours left. Are you sure they can replace a task in that time?
I think they should, but if the above is true then it will be either problem D or E, and replacing that would be a little difficult. I still think it's doable: since there are so many problems and rounds already in the queue, they could use a similar-difficulty problem from a future round.
PS: After all these comments, if we still see a question on string pattern matching, it could help a few candidates gain rating.
Oh, man, change a task for that reason?
Well, authors, change all the tasks please! I think there will be tasks for 2-sat, binary search, bitmasks, bruteforce, chinese remainder theorem, combinatorics, constructive algorithms, data structures, dfs and similar, divide and conquer, dp, dsu, expression parsing, fft, flows, games, geometry, graph matchings, graphs, greedy, hashing, implementation, interactive, math, matrices, meet-in-the-middle, number theory, probabilities, schedules, shortest paths, sortings, strings suffix structures, strings, ternary search, trees and two pointers.
Lol, do you really think listing all the data structures is equivalent to the first comment?
Let's say we actually have a problem D on the Z-algorithm in this contest; don't you think people who see a substring or pattern-matching question will first try to relate it to the Z-algorithm?
Okay, if you disagree with all of my above comment, then why are contestants not allowed to comment even on the data structure used in a problem during the contest? I must say that sometimes knowing which data structure to use makes a problem much easier to solve.
The key phrase is "during the contest". This statement about the Z-function is harmless, because participants don't know the tasks and all their guesses are just random.
Chill dude. Why so serious?
What is Z-algorithm?
Z-function
Is it a very advanced one, or is it recommended for Div. 2?
I think it is appropriate for Div. 2; there are also similar topics for Div. 2, like the prefix function or string hashing.
As a tester the problems were very interesting :)
Were the problem statements short, sir?
Why does everybody want to see short statements? It is not so difficult to read legends, if they are not very long, of course.
As an author of round 672, I would like to say that some authors love interesting legends in their statements. It is such a pleasure to make tasks about your favourite films, or games, or even about your friends.
Of course, I can understand participants. Rounds require speed. But to my mind, the one or two minutes spent on a legend are not important.
Please don't downvote me for the unpopular opinion.
Sorry. I want to see short statements because I want to see the beauty in the problems instantly. And by beauty I mean algorithmic and mathematical beauty. I don't care about the story, and even while upsolving, I become very happy to see a problem which has a short formal description but very hard to think. Most recently: 986C - AND Graph, I bookmarked this problem even after solving it, I was so happy.
Also, my dad peeks into my laptop from time to time, and if he sees me solving a problem related to zebras and cookies and Gildong the dog, he will question my career choices. That is why I always switch to some AtCoder problem (without flavour text) when he is around. :P
How do long statements relate to "algorithmic and mathematical beauty"? Legend is only a story around the mathematical construction.
If you don't like stories, you can just skip it and get the formalistic statements.
"I become very happy to see a problem which has a short formal description but very hard to think." I agree with you, of course, beautiful tasks are very nice. But why this task can suddenly become worse if there was a legend? The task remains the same, the solution remains the same.
If it is just aesthetic love to short statements... well, maybe. But I still don't understand it.
And yes, your situation with your dad is interesting, but I don't really think that this could be a real argument for other people.
However, I appreciate your opinion. "So many men, so many minds".
I don't care about the story
This might change your mind.
I'm sorry, what's wrong with this task?
I think the story is very well connected to the problem which makes it interesting.
Oh, I thought that you were against this task.
Now, I agree with you, this is a beautiful legend that fits the task pretty well. Nice example :)
"Gildong the dog"
Since when was Gildong a dog?
But some people like me are bad at English and have to use Google Translate to read the problems, and long statements are difficult for them to understand >_<
There was a really great round 635, where statements had legends followed by a picture. Everything above the picture is the legend; everything below is the formal statement.
This trick solves your problems with English.
Thanks for that.
But actually, **few** CF problems with long statements have a short formal description below the legend.
I strongly suggest that if the author wants to make the statement long, they should at least add a short formal description.
Besides, a long statement may contain confusing wording that leads competitors with weaker reading comprehension (like me) to understand the problem in a wrong way, wasting a lot of time.
No, I disagree. If I have the legend and the task apart, it will look like "listen, I have a legend, but nobody cares about it, so here is a formal description".
Real life tasks most commonly don't have formal statements.
What's worse, when we copy the statements into Google Translate, the Markdown and the $$$\LaTeX$$$ are lost.
I think one issue is that some legends don't really add anything to the problem. Legends like a city connected by roads being described by a graph, or two people playing a game and determining the winner if both play optimally, seem natural. Legends like someone receiving an array for their birthday and having to calculate some property of it seem a little contrived. Then there are those legends where person A proposes a problem to person B, and person B can't solve it for reasons like not having enough time, being too young, or being too small, and now they ask you to solve it for them. There's also the other situation where person B can solve it and wants to know if you can solve it too. Some of these can be entertaining, but they also feel a little unnecessary.
I strongly agree with you! Legends like "birthday" or "B can't solve it" or "can you solve it too" are so annoying! Honestly, I... well... at least I don't like these types of legends (I really don't like them at all).
However, I was talking about good legends, without this typical words about birthdays or "A wants to solve but can't".
How did you know that Mike received an array for his birthday lol.
What is the problem score distribution?
I hope Vladik will put me on the tester's list. ⊙︿⊙
I am sure, he will.
I think Vladik doesn't like you or maybe he is giving you an opportunity to maximize your rating.
I guess it's a trap. :)
Wow, netman was once an IGM. I thought Vladik was the highest rated at first. Looking forward to some really good math problems.
I'm trying but not getting better. I hope I will do well in today's contest.
Did you try not trying?
Taking breaks actually helps.
What is the score distribution of problems ?
*Problems
The scoring distribution is 500 — 1000 — 1250 — 1500 — 2250 — 2750.
bad contest.
no you
And now I want to ask you: were the problems' statements short enough? :)
yes, found them easy enough to interpret
Anyone else find B to be a mere implementation exercise? It took me almost 2 hours, mostly because I kept telling myself that Codeforces wants either a "smart solution" or a super dumb one. C looked cool, though.
Great contest! I thought I had registered but apparently I had not; that cost me 10 minutes of the contest, and my mistake led to an emotional downturn for the rest of the time :(
I hate Probability.
How to solve C ?
Find the maximal position X such that for all i > X it holds that a[i] = i.
Then just calculate the answer with a well-known formula:
ans = 1
for each operation i: if (r[i] >= X) ans = ans * (1 - p[i])
The answer is 1 - ans.
Here is how I thought about it: first we find the first index such that the suffix of the array starting there is sorted. Now, for the array to not be sorted after the m operations, the elements at indices first_index-1 to n should not get sorted. Then just do 1-x, as we have to find the winning probability. Idk why it fails on pretest 2.
OMFG, I just found the mistake: I should have initialized ind = m, not m+1 :(
Well, first find the length of the largest suffix of a that is sorted ascending. Say it has length len, and let x = n-len. Clearly, if any operation (r, p) with r >= x goes through, the whole array becomes sorted. Say the operations satisfying this condition are opi1, opi2, ..., opik, where 1 <= i1 < i2 < ... < ik <= m; picking any combination of them to go through sorts the corresponding prefix of a, and hence the whole array. An interesting observation is that the probability of this occurring is exactly 1 minus the probability of the complement, i.e. none of these operations going through; operations not in the list don't matter. The probability of the complement is just the product (1-pi1)(1-pi2)...(1-pik), so the answer is 1 minus that. One corner case: if the array is sorted to begin with, the answer is just 1.0 (the operations can't affect it).
I did the same thing; I was a bit confused about the corner case at first. So here's a thought: if some prefix of the array is already sorted (with prefix_index > first_index where the suffix is sorted), then we should not need to count the corresponding operations towards the winning probability, since whether we include them or not the array will end up sorted anyway. This logic handles the corner case well and also makes more sense. However, we do count their contribution to the final answer; why is that?
Because in the given corner case, if we used the same algorithm logic, we would be ignoring the fact that even if none of the ops with r >= x go through, the array stays sorted. That means the product we normally subtract from 1 isn't valid in this case. That's literally the only difference between the algorithm's answer and how we handle the corner case.
Could not understand question C. Can anyone help?
Note that once the array is sorted it never gets unsorted, by the definition of the problem.
Then, if a suffix of length $$$x$$$ is sorted, the array can get sorted in experiment $$$i$$$ iff $$$r[i]+x \ge n$$$.
In that case it stays unsorted with probability $$$1-p[i]$$$.
So we need to multiply all these numbers to get the probability that the array is still unsorted after the last experiment. Then the answer equals 1-prod.
Got it now. Thank you!
You are given a permutation.
Then you are given $$$m$$$ operations of the form $$$(r_{i},p_{i})$$$: operation $$$i$$$ sorts the first $$$r_{i}$$$ numbers of the permutation with probability $$$p_{i}$$$.
Your task is to find the probability that the permutation ends up sorted.
Test 13 of E probably catches overflows of long long.
Can anyone pls explain the solution of problem B...? :(
Since you can easily get the solution later, here's a hint: try searching for the problem "maximal square in a rectangular matrix of 1s" and use the same approach. I did.
How to solve E?
My solution breaks it into two cases, although maybe someone else can provide a more concise solution.
I won't detail the exact formulas, but you can see them in my submission.
Case 1: $$$x \geq y$$$
The overall value of $$$k$$$ diminishes over time, so we should greedily add $$$y$$$ whenever possible to prolong the time the cooler amount stays above $$$l$$$. Initially, for some number of days, we won't be able to add $$$y$$$ without overflowing past $$$r$$$, so our cooler amount decreases by $$$x$$$ for each of those days. After a certain point, we will always be able to add $$$y$$$ without overflowing, so our cooler amount decreases by $$$x - y$$$ for the remaining days. We can calculate all this information in $$$\mathcal O(1)$$$ with some math.
Case 2: $$$x < y$$$
Because $$$y$$$ is greater, we can always first let the cooler drain at a rate of $$$x$$$ for as many days as possible, before adding $$$y$$$, then repeating. This is optimal since we're only refilling when absolutely necessary, so we don't risk overflowing $$$r$$$. There might be an $$$\mathcal O(1)$$$ math formula for this, but I'm not sure. I did $$$\mathcal O(x)$$$: whenever we add $$$y$$$, it's because we can no longer drain any more $$$x$$$, so $$$0 \leq k - l < x$$$. We can thus treat this as a graph that cycles through nodes $$$0 \dots x - 1$$$, and if we ever encounter a cycle, we know that going through this cycle will sustain the cooler capacity for the entire $$$t$$$ days.
I know that wasn't the clearest explanation (I found it a bit difficult to put into words), but hopefully the code can clear up any doubts. https://codeforces.me/contest/1461/submission/100945988
Thanks a lot for writing such a detailed explanation Wiz. :)
These precision errors always frighten me. I solved D and E but couldn't get C even after many attempts. Can anyone tell me what is wrong with my code? https://codeforces.me/contest/1461/submission/100926909
Try using long double.
Edit: nvm, see the comment below.
Actually, you are not reading all the input for a test case and returning early.
How to solve E??? I can't even see how to approach it...
I was implementing an inclusion-exclusion for C but kept getting TLE on pretest 10.
Inclusion-exclusion is O(2^n). For example,
|A ∪ B ∪ C| = |A| + |B| + |C| - |A ∩ B| - |B ∩ C| - |C ∩ A| + |A ∩ B ∩ C|
Inclusion-exclusion wasn't needed. 1-p is the product of the individual (1-pi)'s over the operations for which the part after position ri is sorted; from that, p can be computed.
I have used inclusion-exclusion as well. But I have used dp for calculating it.
Is it just me, or was C too easy? B felt harder to implement.
My approach for C: if you know the array after index idx is sorted, then any operation that operates on idx or above completely sorts the array. So the probability of at least one such operation = 1 - the probability that no such operation happens.
With the same approach, why does my code fail?
https://codeforces.me/contest/1461/submission/100926909
Please help me find the error.
I made the same stupid mistake: you terminate the function when maxp == 0 and don't read the remaining input.
Ah Yes!! I thought about this explicitly :) Lucky me.
You need to sort the probabilities by prefix length.
You need to ignore all probabilities with too short a prefix length.
I think D is well-known. upd: Divide and Conquer DP
What is it? I wrote a strange recursion which might fail if someone hacks it.
Can you please provide a reference? Thanks.
Is it? Can you please share the link if possible?
What was the logic for problem B?
Just implementation (you can use prefix sums as well).
How to do it using the prefix sum?
Well... just as said: you brute-force the peak of the tree (the highest *), then check that all cells on the lower layers are * (prefix sums can do each layer check in O(1), which is fast). So in total we have O(N^2*M).
100962567 O(N^2*M) gave TLE in B.
Let's define dp[i][j] as the number of spruces where (i, j) is the top. Now you can calculate the dp table iterating from the bottom to the top. The answer to the problem will be the sum of all values in the dp table.
Interesting solution... Do you have a proof for it?
I would like to think of this as: dp[i][j] is the maximum height of a spruce where (i, j) is the top, instead of the number of spruces.
It's obvious that, looking at (i, j) alone, if the maximum height is dp[i][j], there are dp[i][j] possible spruces starting from (i, j).
Now, to have a tree with height dp[i][j], you need the next layer, which is dp[i + 1][j - 1], dp[i + 1][j], dp[i + 1][j + 1], to support height at least dp[i][j] - 1. That's why you take the minimum of them and add 1.
I'm not sure if this helps, but this is my perspective on the solution.
This picture is not to scale but can help. Assume there are cells below as well.
Amazing solution with better time complexity
Could someone tell me why my problem C code was giving TLE?
You are not reading the whole input correctly.
Thanks, I guess Makise_Kurisu is pointing out the same thing below.
Did the exact same mistake -_-
Am I the only one who wasted 30 minutes on C before realizing that I forced the solution on to the next test case without reading the queries for that one?
PS: spookywooky can relate :(
That cost me 2 penalties.
yeah me too :(
We are all beginners ;)
LGMs left the chat.
How to optimize B ?
Use prefix sums.
Could someone please debug my code for problem D? 100952421
Just a small hint: to prevent bugs in binary search you can try to use lower_bound() and upper_bound()
It fails on pretest 7; until then it was all good.
My code was also failing on pretest 7. For me it was an integer overflow problem, and changing int to long long fixed the code.
Is there any way other than dp for B?
B is not dp
B is in my opinion a spin-off of the classic dp problem "Maximal square". If you know that problem, I guess coming up with the dp solution would be much easier.
I know a way using the Manacher algorithm and two pointers in $$$O(n*m)$$$ worst case.
Could you tell me about it please?
Here:
basically, I calculate a horizontal radius for each cell using the Manacher algorithm on each row (mapping $$$ * $$$ to $$$ 1 $$$ and every $$$ . $$$ to a distinct value $$$> 1$$$), and then I apply a column-wise two-pointer pass, so we get $$$O(n \cdot m)$$$ time.
I used a simple algorithm for B
Basically, build it from the bottom (after checking for valid indices, etc.) :
Reference
Edit: Perhaps that qualifies as DP (not sure)
Very nice problemset!!! XD
B is very similar to https://codeforces.me/contest/1393/problem/D
Nah, that problem D was 50x more cancerous than this problem B :P. On this problem B they were generous enough to let you bash.
If problem D is cancer you did it wrong...
How to solve D ?
I believe a segment tree should do it; just sort the array beforehand. For some reason I could not get the second given pretest right, so I didn't submit. But it still seems relatively simple. Correct me if I'm wrong.
The method I thought of: instead of taking the traditional tm = (tl + tr) / 2, we use binary search to find the first index greater than our mid element.
And finally, you insert the sum from tree[v] into some set. In the end for each query if you can find it in the set it's possible, otherwise not.
Just a simple "merge sort"-like recursion will do it all.
I thought that too, but if we divide the array into two parts and the sums of both the left and right parts are greater than the required sum, then we have to check both halves. Wouldn't that take O(n) per query?
You need to precompute all the possible sums using divide and conquer. Then, for each query, you just need to check whether the required value exists (using a map, for example).
Isn't there a possibility of exceeding the memory limit if the map or set becomes very big?
The maximum number of sums we can get should be on the order of $$$O(n)$$$, I think.
It can be proved by induction that the number of all possible answers won't exceed n.
Just calculate the sums of the left and right halves and store them in a set; then recurse on the two halves. You can traverse the array for the sum calculation or use prefix sums.
Then, for each query, check whether the number is in the set and print accordingly.
We divide the array into all the subarrays the process can produce, and store the sum of each one somewhere.
Then each query can be answered "directly" in log n.
No, actually it is much faster than it looks like.
It is easy to see that the maximum depth of recursion is $$$log(n)$$$ because each time we are splitting the array into two halves.
In each level we are doing at most $$$n$$$ operations, by summing up each part of the array.
So that's a total time complexity of $$$O(n*log(n))$$$.
Bonus: we can actually do the summing part much faster by calculating the sum of the desired interval from the sums of its two halves, so that's $$$O(n)$$$ time for the whole recursion instead of $$$O(n*log(n))$$$, but the total time complexity will remain $$$O(n*log(n))$$$ because of the sorting we are doing at the beginning.
You can precompute all the possible answers in O(n log n); then you can answer each query in O(1), giving a total complexity of O(n log n + q).
Do exactly what the problem says. I literally kept a vector in a queue, pushed the left and right vectors into the queue, and continued doing so until the vector size was 1.
Basically, I calculated all the sums obtainable from a given vector; the complexity is only O(n log n).
Does N*M*M/2 pass in B?
N*M*M will also pass .
100956092 didn't pass.
std::map is not O(1), it's O(log n). So your complexity is $$$O(n^3 \log(n))$$$.
But it will be 500*500*500 == 125000000; can we... is it OK? It's a little less than 10^9.
It is 10^9/8, and note that it is actually less than 500^3 since at least one dimension gets halved... so we end up with less than 10^8.
I think yes, if you have good implementation.
Hey, my submission is a plain vector implementation, man; it didn't pass system testing. Might cost me CM, rip. O(N*M*min(N,M)).
Use C arrays instead of std::vector where possible.
There was also a much simpler solution. As far as I know, the authors have a Python solution that runs in 0.28 s.
Damn. Time to learn from mistakes.
I solved A in 2 minutes and then struggled with B for the rest of the competition. Can somebody tell me CLEARLY how to solve B? (I know it is just implementation, but how?)
The values of r are not necessarily distinct in problem C (:
So which value should we consider?
What I meant to say is that you cannot store all of the values (indexed by r) and then compute the answer, because at some point you might overwrite previous values if the same value of r appears more than once in the given operations.
Oh, that's why my solution is failing. Thanks :]
How to implement solution of B?
You can refer to mine: 100926055.
dp[i][j] in my solution denotes largest length of spruce that can be made from the coordinate (i,j).
Hope it helps!
We will use DP! Let's call the maximum possible k for a tree rooted at (i, j) mem[i][j].
Let's call c[i][j] the minimum of the number of consecutive *'s to the left of the cell and to the right of it (including the cell itself). This can be found using prefix sums and binary search in O(log(N)).
Then the maximum possible k for the cell below (i, j) (that is, (i+1, j)) is min(c[i+1][j], mem[i][j]+1), because:
- mem[i][j] needs to be at least k-1 for mem[i+1][j] to be k;
- a height-k tree has 2k-1 *'s at the base.
Then the answer is just the sum of all the mem[i][j]'s.
Submission using this principle.
The system tests for E are weak. I hacked someone with the following test after the system tests:
31 2 48 27 3 2
Yes, and this solution 100938127 was only checked on 82 tests, while your hack is the 83rd.
:sob:
Interesting problemset!
weak pretests :/
What should the time complexity be for problem B? Will an N^3 solution pass?
n^3 will pass
But it will be 500*500*500 == 125000000; how is that OK? It's a little less than 10^9.
If the constant factor is low, it'll pass. For 1 second, the number of simple operations allowed is generally around 4*1e8 from what I know. My n^3 passed in 234 ms.
There are a lot of cases where the solution breaks out of the third loop early. For example, in the worst case the third loop in row 1 iterates m/2 times, in the next row m/2 - 1 times, and so on; and whenever you can't extend the current spruce tree you break, so the actual running time is much less.
in most contests,
- O(n^3) for n <= 500,
- O(n^2) for n <= 3000,
- O(n) for n <= 200000,
- O(nlogn) for n <= 1000000
will pass.
I am getting WA on test case 20 in problem C (I think it is about precision). Can someone help?
My submission
I had the same issue; changing float to double worked for me. Edit: I just ran your code with that change and it worked.
Use setprecision and doubles
I liked the pictures in the problems, but for problem B, adding a pragma after the contest ended results in AC. WHY!!!!??? :(
Can someone please point out what is wrong in my solution for problem C? It got FST on test case 25.
float has very limited precision.
Damn! I thought I was working with long doubles all this time; why in the world did I write #define ld float after writing #define ld long double :((((. I am the dumbest person who ever existed on this planet.
I might have found F interesting, if only s were always "+*".
Why does C fail if we use float instead of double? 100933790
float cannot keep enough precision, while double can.
I see. Thanks!
What is the limit for memory allocation in C++ at runtime?
My solution for B failed system tests because I was allocating (3*500*500) memory at runtime in the worst-case scenario.
The runtime error is from writing out of bounds of the allocated memory.
Thanks, I didn't account for the null character. :(
Because you use char s[n][m], you forgot the place for '\0'. Btw, using VLAs and putting big arrays on the stack is a bad habit and may cause stack overflow.
Thanks, I understand now; that was a silly mistake and a bad habit.
I haven't solved E, but I know a test case where one solution fails. I know I can't hack it myself, but is there a way for somebody else to hack now, or is it not possible after system testing?
You need to be in Div. 1 (above $$$1900$$$) in order to uphack.
There are 90 test cases in E but only 82 were checked for 100938127; I don't know how that is possible.
100962567 In spite of using 3 loops (O(N*N*M)) in B, my code gave Time Limit Exceeded on main test 4.
Same here man, tough luck.
I think B is bad. I spent 15 minutes thinking about how to improve my $$$O(500^{3})$$$ brute force because 1e8 is dangerous. However, in fact $$$O(500^{3})$$$ passes easily. So what is the point of B? Just testing brute force?
If so, that is obviously unreasonable.
First-hand candidate who failed B with O(N*M*min(N,M)) here. You did a good job optimising; chill.
Problems B and D were about justifying to yourself why brute force should work :P
List of ideas that solve F: 100956230
s (when sorted) is one of {"+", "-", "*", "*+", "*-", "+-", "*+-"} (2**3-1==7 cases)
For {"+", "-", "*"} there's only one choice. "+-" is same as "+" (replacing every '-' to '+' doesn't decrease answer)
If there are no zeroes, then answer=a[0]*a[1]*...*a[n-1] If there is a zero (at first position first0), then answer=a[0]*...*a[first0-1]-a_[first0]*...*a_[n-1].
Case first0==0 is "somewhat" special (don't attempt to output leading '-' in -a[0]*a[1]*...*a[n-1])
Similar to the "+-" case, we can replace every '-' with '+' without decreasing the answer, so there are only 2 options for each operation: * or +.
if a[idx] == 0, then the answer is in form a[0] op[0] a[1] op[1] ... op[idx-2] a[idx-1] + a[idx] + a[idx+1] op[idx+1] ... a[n-1], so we can solve original task for subarrays of a that don't contain 0
Now we handle some subarray a[L:R], 0<=L<R<=n-1,
If a[L]==1, then the answer should be a[L] + a[L+1] op[L+1] ... a[R] Similarly, if a[R]==1: answer should be a[L] op[L] ... + a[R]
So now, suppose a[L]!=1, a[R]!=1
Then the answer is a[L] * ... * a[R]. How many values >= 2 do we need? log2(10**5) ~= 17; let's be safe and take 19. If there are >= 19 indices i with a[i] > 1, we're done.
Now we're left with this situation: a[i] >= 1 everywhere, a[L], a[R] >= 2, and at most 18 indices with a[i] > 1.
Notice that our array doesn't contain much information (many values == 1), so it seems natural to compress long runs of 1 with run-length-encoding: [a[i+0]==1, a[i+1]==1, ..., a[i+(k-1)]==1] === [1] * k. This "compression" works because to attain max value we need to have either " + a[i] + ... + a[i+(k-1)] + " OR " * a[i] * a[i+1] *... *a[i+(k-1)] * " to achieve maximum value.
Also note that now we have at most 2*18+1 == 37 different "groups" (each "group" corresponding either to some a[i] >= 2 or to a subarray of all ones).
Now we're ready to solve final reformulation of our problem: there are <=37 "groups" of either a[i] >=2 or a[L:R] == [1] * (R-L+1).
Note that the answer for our subarray will not exceed 9**18 (which fits in long long).
We do this with dynamic programming: dp[i] is the max value we can get using the first i groups. dp[0] = 0, and dp[i] = max over j < i of (dp[j] + [product of the a's in the segment covered by groups j..i-1]). Don't forget the case when the (i-1)-st group is a run of k ones; then we also take dp[i] = max(dp[i], dp[i-1] + k).
Max possible value is in dp[n]. To restore the answer, add a second int, so dp[i] becomes {max_expression_value, j that achieves optimal value}. We have O(37) states and O(37) transitions for each states, so O(37**2) in total.
That's it! Running time is O(n*37)=O(n).
Thank you, I got all these ideas except the last one with dp, now I know what I missed.
In the 10th nested spoiler :woozy_face:
Thank you very much, I managed to get back to green after almost a year!
For problem D, I used the following solution: 1. sort a; 2. compute prefix sums of a for efficient range-sum operations; 3. for a given sum s, call the helper method check on the entire range of a.
In this helper method check, do exactly what a split operation allows us to do: 1. split by the middle value midV; 2. binary search for the rightmost index midIdx such that a[midIdx] <= midV; 3. then check the range sums of [l, midIdx] and [midIdx + 1, r]. If one of them equals sum, return true; if both are smaller than sum, return false; if one is bigger than sum, recursively check that range.
The terminating conditions of this check method are: 1. l > r (the left index is bigger than the right index); 2. the range sum of the current range is < sum; 3. the range sum is > sum but a[l] == a[r], meaning we cannot split anymore; 4. if the range sum equals sum, simply return true.
My submission is 100978064
I keep getting a stack overflow error on one of the recursive calls, but I can't figure out why my terminating conditions are incorrect. Can someone help me with this? Thanks!
I made a stupid mistake. In problem C, when the array was already sorted, I directly continued, ignoring the rest of the input for that test case. I submitted this problem for the first time at 00:51 and passed it in the last ten seconds, all because of this stupid mistake. So I have an idea: could someone collect common stupid mistakes and share them, so that when a solution fails we can first check whether we made one of them? (If no one does this, maybe I will.)
Finally master! Thanks for the very interesting problems; I finished only a little behind the winners!
I received a system message saying my submission for problem D coincides with someone else's submission. But I always use the same FastIO module, which can be verified from my previous submissions. I don't understand why it's considered a violation of the rules this time.