Prepared by Vladik.
Prepared by AleXman111.
Prepared by netman.
Prepared by 74TrAkToR.
purplecrayon didn't comment, scary
Not trying to be hateful towards the setters and the testers, who put a lot of effort into making this round as good as it was, but the pretests for problem A were not so great: they didn't test the boundaries of the constraints, causing many people to get time limit exceeded on the main system tests.
Other than that, great round!
can you please elaborate more?
The problem states that the maximum limit for m is 10^9, but none of the pretests set m to its maximum value (10^9), so many solutions passed the pretests but failed the main tests (which do have cases where m = 10^9).
If you put every possible combination in pretests, what's even the point of having system tests?
Of course not every possible combination should be in the pretests, but the basic corner cases for the problem should be (e.g. minimum constraints, maximum constraints, unusual input, special cases...), which is what makes pretests strong.
So you want to say that pretests are present just to scam people? Just think before you type something :/
Pretests exist to check for corner cases and smaller-sized inputs, that's it. I didn't imply anything about scamming people, think carefully before you accuse someone of something.
Bruh, just stop saying crap.
Follow your own advice
That explains your contribution.
To everyone who's downvoting, at least let me know: what's the point of having system tests?
Pretests exist because of the limited capacity of the servers. During the contest, if you have a lot of tests, the judge might just give up under the immense load of submissions. Hence pretests are a subset of the actual tests that make sure your solution is correct, but with much less load, so that the server can deliver fast results.
So how should system tests be designed such that when a code passed pretests but fails system tests, people don't say that the pretests are weak?
I agree that the pretests are weak, but don't people see the constraint for m being abnormally huge? Or maybe they forgot to look closely since they are speedforcing?
I agree, some people may lack concentration when focusing on speed.
Maybe unpopular opinion but I think that weak pretests don't necessarily lower the quality of the round.
yeah but it lowers your dck. Motherfcker stop giving your shit opinions. Asshole. Will choke you with big D... .
But the data of Problem D is so annoying!
.
.-. If you like, I had a better O(n) solution: 117940985
In problem E, if the test case is just n = 4, k = 2, then the good permutations are {{1, 2}, {1, 3, 2}, {1, 3, 4}, {1, 4, 2}, {1, 4, 3}, {2, 1}, {2, 3}, {2, 4, 1}, {2, 4, 3}, {3, 1, 2}, {3, 1, 4}, {3, 2}, {3, 4}, {4, 1, 2}, {4, 1, 3}, {4, 2, 1}, {4, 2, 3}, {4, 3}}, making a total of 18, and the sum of switched-on lights over all combinations is 48. Shouldn't the answer then be 48/18? I checked against others' code and it is not correct. I am sure I am making some blunder; can anybody please help?
Shit!! These events are not equiprobable. I just multiplied each by its probability and it got AC. Sadly, I was debugging this for an hour during the contest. Sorry for my silly question.
We apologize to all the participants who did not find some of the statements completely clear :(
We tried to answer all your questions as quickly as possible and to minimize the impact of the statements that were not immediately clear.
We will do our best to make our next round better.
yup, good job
your previous round was great
what happened this time?
It's hard for me to find an answer to your question. Indeed, I have invested much more myself in this round than in any of the other rounds that I have authored. I really wanted to do something awesome that the community would appreciate, but at some stage, something seemed to go wrong.
I will personally do my best to make the next Deltix round amazing, even if I am not the author of it.
What would you say about FSTs on D even after 37 pretests? :/ (I lost a chance at purple because of that :sob:)
I looked at your solution. It seems that you are using some kind of greedy algorithm. While preparing this problem, I wrote all the greedy algorithms that I could come up with, and none of them passed the pretests. I am sorry for your unpleasant experience.
Yes, I agree. Tbh I wasn't even expecting it to pass myself. But the fact that it passed pretests was very surprising to me as well. Anyway, it's just part of the game. Thanks for the contest. Hoping to see you with another set of bugaboos soon. :)
Thank you, glad to hear something positive
The TL in problem F is too tight; I have a solution with the same time complexity but it just couldn't pass.
About 2E8 operations in 2 sec is just too tight.
Well, judging by Petr's comment the TL was not tight enough.
D's pretests were hilariously weak. My pure brute force with bitset got up to test 127 and I have no idea how.
Edit: OK, so I got it to pass. I think this would be hard to hack imo. (aand someone hacked :p)
With some optimization I was able to pass the system tests.
Submission
Feel Free to hack ^_^
what is that j = rng()% n + 1 thing doing
i guess that made it pass.
It is selecting a random index between 1 and n.
Lmao, I made a tiny optimisation to mine and it passed. Edit: nvm, hacked
I think it was easier to solve D with greedy than with other methods. I just figured out the best indices for putting 1s and then greedily replaced some 1s with 0s to get the best solution. However, it would be quite interesting if someone can find a counter case for my solution. Here's my submission: 117911118
PS: Its runtime was better than bitset solution :)
Check this test case: 6 3 2 101 101 101 010 010 010
Your solution gives: 010
According to the problem, it requires a subset of max size, so the answer should be 101.
Feel free to correct me if I am wrong.
It is even funnier because test 127 was sunzh's hack (and it was the only hack on this problem during the contest).
(I also failed on 127, same as the only second person from my room who passed pretests on D)
sharath101 why did you use bitset? As m < 61, we can use long long, right? Or is bitset faster than long long?
I used a bitset of length n, since there are n friends.
Can anyone please tell me how to solve E using linearity of expectation?
although this is a coding contest, it really makes me doubt my English skills
Asking for hack: my submission for D
My solution is totally wrong but I can't hack it.
What was TC #8 of test 2 for C? I couldn't find a test case where my solution fails and didn't understand how to test my solution for an edge case.
Hey, did you find any test case? Please let me know if yes. Thanks in advance :)
Nope.
But you can try:
1
12
1 1 2 1 3 1 4 1 2 1 5 2
Thanks buddy
Shittiest contest ever. problems were shit with no new ideas. the hardest part in the contest was understanding the statements :\
C has a really nice solution, though you are right at the statements part lol.
Although you might have solved C in a few minutes.
Well, to be honest, I think C is quite a boring problem. Although I didn't think the statement of C was bad during the contest, it seemed like just another basic stack problem anyway.
Any help with the intuition for the stack, please?
I don't know if I am missing some silly point, but I don't get how someone could think of a stack for this problem during a contest.
I particularly mean some intuition or proof that using a stack will always yield an answer (wherever possible).
The editorial also gives a stack solution. I think a simple greedy approach can reach it.
just use a vector and pop_back
When the problem is way below your level, it's extremely rare for you to find it interesting. It's just a matter of different perspectives here.
no, it's because it's readforces, not a genuinely hard problem.
I'm not here to argue with you whether C is good/interesting or not (it's trivial to me, if you really want to know).
The point I want to make is that: when a person finds something nice about a problem, let him enjoy it. It's unnecessary to ruin his fun when you're at a much higher level.
C is nothing new.
Your CF Round #684 was infinitely worse: just stupid implementation crap. Except for problem C, all the other problems were very good.
The problems A, B, C were trivial, just implementation. The statement of C would have been unclear without the picture, but with the picture and example it was clear. The idea of hosting a combined div1 + div2 is weird: obviously 1k+ people of 2100+ rating will beat all div2 participants. That's why people think not giving the contest is better than competing with a 1000+ offset in rank.
your opinion is wrong. i understood the statements, so you’re just making excuses for getting negative delta.
i wrote some toxic trolling shit and it got +15
wannahavealife just wrote “yes” and got -13
#SayNoToRatism
I am a newbie in CP and I got TLE on problem A. Can someone tell me why, and how I can optimize my solution? https://codeforces.me/contest/1523/submission/117914318
Try running the loop till min(n, m): $$$M$$$ can be up to $$$10^9$$$.
Can anyone help me in getting a better intuition to how the solution of C works?
The intuition for the stack: when we are inside some nested list and the current value cannot extend the inner list any further, we need to go back to an outer list; this gives the intuition of a stack.
At any time the stack will look like this (it tells the immediate predecessor of the current list):
{ .
.
1.3.2
1.3
1
}
Now, for each ai:
1. Extract the last number at the top of the stack; for 1.3.2 that is 2, let's call it temp.
2. If ai == temp + 1 (here, ai == 3), we can extend the list like 1.3.3, so just pop 1.3.2 from the stack and push 1.3.3.
3. Else if ai == 1, we can't increase 1.3.2, but we can go one level deeper, like 1.3.2.1 (think about why this always works :) ), so just create 1.3.2.1 and push it onto the stack.
4. Else we need to go outside the current list, so just pop (the stack will never become empty if a solution exists) and repeat.
My solution
Is this a valid list?
1
1.1
1.2
2
1.3
I just want to confirm: can we add 1.3 (a deeper level of 1) after 2? And is 2 lexicographically greater than 1.3? I know it's a basic-level doubt but I am a bit confused, please help. Thanks.
This is the wrong answer to the question; it should be in lexicographical order:
1 1.1 1.2 2 (at this point you went out of the list containing 1.1 and 1.2) 1.3 (so you cannot write 1.3 here; see the picture in the question for more clarity)
No 1.3 is not lexicographically smaller than 2
Then why can't we add 1.3 after 2 if 1.3 is lexicographically greater?
He typoed: 1.3 is lexicographically smaller than 2. They are compared by their first characters, then the next, and so on.
It's just how an outline works! Such as when you have something such as
Can anybody explain why my solution for A is giving TLE? My approach is the same as the editorial's.
check is O(n), because the whole string is passed to the function by value (not by reference). Try changing bool check(string s, int i) into bool check(string &s, int i).
You are making a very trivial mistake.
So I made this change in your code and it got Accepted. Submission: 117923870
In problem F, I also needed a priority queue to determine the correct order of processing the states, therefore my solution had an extra log factor. How do you avoid this?
Found the answer in ecnerwala's solution: the states F(mask, x) are always visited in increasing order of x for the same mask, so we can just iterate over mask after each quest and check if "x" should be increased to visit appropriate F() states.
Problem D's test cases were weak (brute force got accepted through more than 100 test cases), but not so weak that a brute forcer could crack system testing. :))
good editorial
Not sure if this has been pointed out before, but it looks like CF's judging servers are non-deterministic with respect to TLEs?
This submission 117921507 and this submission 117923079 contain exactly the same code, with the exception of a comment on the last line. The first one ACed in 2.4 seconds, the second one TLEd. Does your code run slower when CF's servers are under load?
This is becoming the most downvoted editorial ಥ‿ಥ Why all those downvotes?
The round was bad: most people solved only A, B and C, because D was hard.
That by itself doesn't mean the round was bad.
Also, it makes more sense to downvote the announcement if you're unsatisfied with the round, not the editorial.
Just curious, was it your intention to fail the 3^N-check solutions for D? I FSTed it with TLE on tc 126, but afterwards got AC by just using fewer trials. I feel it's not necessarily a bad thing to force n*2^n; I just wish the constraints had made it slightly clearer.
In problem C, is it ideal to use the stack data structure? You can't iterate through a stack without deleting everything, and you need to iterate to print out the corresponding line.
You could keep an additional string and delete the last two characters each time you pop, but that only works if you only have single-digit numbers.
Or you could copy your stack each time you print, but it seems simpler to just use an array.
today, english IQ mattered more than spatial or memory IQ
I'm not sure whether it has to do with English...
Anyway, in C I just skipped all the gibberish and figured out the true statement from the picture and examples. Not the first time I've had to do this.
In problem E, this sentence:
Shouldn't the formula be $$$\frac{p!}{n \cdot (n - 1) \cdot (n - 2) \dots (n - p + 1)}$$$? $$$+1$$$ instead of $$$-1$$$? For $$$p=1$$$ we get $$$\frac{1!}{n }$$$ with $$$+1$$$ which should be right.
Me after getting AC on D with a dumb brute force: OMG, I'm gonna be CM!
Me after system tests: how to stop crying :sob:
In the editorial for E it says "Now for a new value of p we need to calculate the number of fitting states. Without the condition about having a distance of k between the lights this number would be equal to the number of ways to split the lights into p groups, which is C(n−1,p−1)." So lets say n=3 and p=1. Then we get C(n-1,p-1) = C(2,0) = 1. But there are three ways to choose a single lamp to turn on, so there would be three fitting states. Am I not understanding this editorial correctly or is there a mistake in it?
Can anyone explain why this happens? Q1: suppose the time limit per test is 1 sec and there are 1000 test cases; how much time does Codeforces allow for running the whole program, 1 sec or 1000 sec? Q2: I tried to add some overhead (a 1e9 loop) but the execution time did not change. Why? 117926727 and 117927902
1) 1 sec. 2) It might be because the compiler recognizes that the a and b variables don't do anything, or it sees the for loop and concludes that the only thing it does is set the variable a to the value 10.
FST Problem D :(
The editorial for D seems a little too minimalist, but maybe it's just me. I can't follow the first paragraph, much less the second.
For anyone else having trouble, here is an explanation I could understand, from yash_daga in the announcement post:
"Randomised solution for D:
Randomly take a friend and assume it to be the part of the final subset now compare all other friends with this subset and use submask dp in the end to find maximum size subset. I took 25 random friends."
BTW I found ksun48's solution super clean and similar to the idea in the tutorial: 117886225
UPD: Don't know why rerunning this doesn't pass the time limit, though.
UPD: here is my AC'ed solution: 118024463
From this editorial we discovered that the probability of the contestant being hit by a falling meteorite is $$$10^{-6}$$$. But how did you calculate this probability?
I think it roughly equals (number of CF users hit by a meteorite fall) / (number of meteorite falls).
For each participant, they rolled a 10^6 sided dice using random.org. If it came up as a 1, then they used their advanced space technology to locate the contestant and hit them with a meteorite.
The editorial for task E doesn't really explain why the proposed result is the expected value that needs to be calculated. Is it true that the expected number of lights turned on is $$$1$$$ + the expected number of moves after which the machine is still not broken? Although I got an AC, I'm still not convinced why it works...
https://math.stackexchange.com/questions/497101/showing-that-rm-ex-sum-k-0-infty-pxk-for-a-discrete-random-variabl
I think this answers your question. Note that P(X > 0) = 1, and P(X > k) is the probability that there are k lights on and we haven't stopped (there will be at least one more light, so X > k). The formula is also true in the continuous case (with integrals).
Problem E
Why is the editorial so unclear? Or am I just stupid? I don't understand why $$$C(n−(k−1)⋅(p−1),p−1)$$$ is right for the number of arrangements of p lights with min distance greater than or equal to k. We can choose $$$p$$$ lights from $$$n-(k-1)(p-1)$$$ and put $$$k-1$$$ other lights after each of the first $$$p-1$$$ chosen lights, so $$$C(n-(k-1)(p-1), p)$$$ I understand. Is it a typo?
Why does summing these values multiplied by $$$C(n, p)$$$ give the expected value? The expected value is a sum of values multiplied by probabilities. We have the number of arrangements of p lights with min distance greater than or equal to k, and we need to place the $$$(p+1)$$$-th light to break this condition and finish the device. I don't understand how this is taken into account.
I think some of the formulas in the editorial have off-by-one errors, as I had slightly different formulas when upsolving. In any case, I can try to explain the idea.
First, we can rephrase $$$\text{E[lights turned on at the end]}$$$ as $$$\text{E[number of turns the machine works for before breaking]}$$$, and split that into $$$\text{E[number of turns the machine works for and is one turn away from breaking]} + 1$$$. Now we want to count the aforementioned quantity.
We can count more generally the number of states the machine can be in, such that no subarray of size $$$k$$$ has more than one light turned on. Say the machine has worked for $$$p$$$ turns, meaning it has $$$p$$$ on lights. In order for the $$$k$$$ condition to not be violated, there must be at least $$$k - 1$$$ off lights in between on lights. We can think of this as a stars and bars problem: our $$$p$$$ lights function as bars separating the remaining $$$n - p$$$ off lights into $$$p + 1$$$ groups, where the middle $$$p - 1$$$ groups must each have at least $$$k - 1$$$ off lights. If we start by putting $$$k - 1$$$ off lights into each of the middle groups, then we are left with $$$n - p - (k - 1) \cdot (p - 1)$$$ off lights to distribute into $$$p + 1$$$ groups, where groups may be empty. Applying stars and bars (specifically theorem two in the link above) gives us $$$\binom{n - (k - 1) \cdot (p - 1)}{p}$$$.
Now, not all of these states can lead to the machine breaking in the next turn. But each state contributes some amount to the answer. The machine starts at the state of $$$n$$$ off lights, then each turn it moves to another state by turning on some light, until it breaks. So the number of turns the machine works for is the number of states on the path it takes. Each state thus contributes $$$+1$$$ to the answer for the probability that it is on a path. The probability that a state with $$$p$$$ on lights is visited is $$$\frac{p!}{n \cdot (n - 1) \cdot (n - 2) \cdot \dots \cdot (n - p + 1)}$$$. Editorial says $$$n - p - 1$$$ but I believe that's a typo. This comes from the fact that we can pick any of the $$$p$$$ lights first, then any of the $$$p - 1$$$ lights second, and so on. Similar logic is applied to the denominator.
So our final answer is thus $$$1 + \sum_{p=1}^n \binom{n - (k - 1) \cdot (p - 1)}{p} \cdot \frac{p!}{n \cdot (n - 1) \cdot (n - 2) \cdot \dots \cdot (n - p + 1)}$$$.
Thank you!
Do I understand this right: The $$$1+...$$$ from the final answer is from the fact, that there is a 100% chance to end in an ending state, i.e. one with 2 lights in $$$k$$$ consecutive lamps (We didn't count those states yet so we are not overcounting this way)?
Edit: Thank you very much for the explanation! It explains critical aspects that have been left out in the editorial and helps really much. I was able to write my own solution from scratch and to understand and reconstruct the proof for this solution with your help!
Thanks, nice explanation. But the moment most unclear to me you explained not so well, or maybe I'm just not smart :). Why is this sum the expected value? Let a good state be a state such that no subarray of size k has more than one light turned on. $$$PG_p$$$ is the probability of being in a good state with p lights turned on, and $$$PT_{p+1}$$$ is the probability of being in a terminated state with p+1 lights turned on. Then the answer is $$$E = \sum_{p=1}^{n - 1} (p + 1) PT_{p + 1}$$$. From good states with p lights we can move to good states or to a terminated state with $$$p+1$$$ lights, with probabilities $$$trg_p$$$ and $$$trt_p = 1 - trg_p$$$ respectively. Then $$$E = \sum_{p=1}^{n - 1} (p+1)PG_p * trt_p= \sum_{p=1}^{n-1}PG_p + \sum_{p=1}^{n-1} p * PG_p - trg_p * PG_p - p *PG_p * trg_p$$$, so why is $$$\sum_{p=1}^{n-1} p * PG_p - trg_p * PG_p - p *PG_p * trg_p = 1$$$?
I'm not too sure where the right hand side of your equation, $$$\sum_{p=1}^{n-1} PG_p + \sum_{p=1}^{n-1} p \cdot PG_p - trg_p \cdot PG_p - p \cdot PG_p \cdot trg_p$$$, comes from. I think it might help to not think of expected value as the canonical definition of $$$\sum_{p=1}^n p \cdot \text{P[machine terminates with } p \text{ on lights]}$$$, but instead think of it as a contribution of each state to the answer. One way of considering that is as counting states on paths as per the last paragraph in my comment. Another way that might be more straightforward is _Bishop_'s comment below, where we split the expected value further into sum of probabilities that the machine is still working after each turn with linearity of expectation.
Oh, thanks. After thinking hard, I understood your last paragraph and the explanation by sparshsinha123.
I'll leave one comment. I didn't use the rephrasing of the expected value, and calculated the probability to terminate with p+1 lights using the following logic. Let's say we know it didn't terminate at a state with p lights; then it will split into (n-p) new states with (p+1) lights. Some of them are terminating, and some are not. But we know the formula for the non-terminating ones! Thus, we take the number of non-terminating states with p lights on, multiply by (n-p), and then subtract the number of non-terminating states with (p+1) lights on. All we have to do is multiply by the probability of this state, factorial(n-p-1)/factorial(n), and multiply by the value of this state, (p+1). 118170317
Thanks, this problem has so many smart approaches :) But you have some inaccuracies. First, you calculate not the number of non-terminating states but the number of non-terminating states together with the order of turning on the lights, which is what simplifies the transition to $$$p+1$$$ lights (multiply by $$$n-p$$$). Second, factorial(n-p-1)/factorial(n) isn't a probability; it's the number of ways to select $$$p+1$$$ lights from $$$n$$$ in order.
Yes, I take the order of the lights into the state as well. And factorial(n-p-1)/factorial(n) is exactly the probability of reaching that particular state.
Thank you for the clarification!
Your explanation is very clear, and I feel that the official explanation was very irresponsible (for me personally).
I had been troubled by the official solution for a long time. Thank you again!
Let $$$X$$$ be the random variable denoting the number of lights turned on when the $$$\textbf{process terminates}$$$. Now, let us try to figure out the value of $$$P(X \geq d)$$$. It is easy to see that for $$$d \geq 2$$$, the value of $$$P(X \geq d)$$$ is nothing but the probability of arriving at a sequence of $$$(d-1)$$$ elements such that each pair of adjacent elements is at least $$$k$$$ apart. And $$$P(X \geq 1) = 1$$$. Now, once this is done, we can write
$$$E(X) = P(X \geq 1) + P(X \geq 2) + \dots + P(X \geq n)$$$ as $$$X$$$ is discrete random variable.
thanks
Need help with the solution of problem C. Getting WA on test case 2.
Solution link: 117905624
D is also solvable without SOS dp, using what's maybe considered bitset cheese: https://codeforces.me/contest/1523/submission/117930426
Not sure how my solution worked for D, but it is not randomized and falls well under the time limit.
I basically made as many optimizations as I could think of, hoping it would land me under the time limit. I treated each string as a bitmask, then counted how many times each bit showed up. I removed all the bits that showed up fewer than $$$\lceil\frac{n}{2}\rceil$$$ times. With the modified bitmasks, I combined strings with the same bitmask using a map that stored each bitmask and the number of times it occurred. I then took each bitmask and added its number of occurrences to all of its subsets, to find out how many times each bitmask showed up throughout all the strings. The last step was finding the bitmask with the most ones that had at least $$$\lceil\frac{n}{2}\rceil$$$ total occurrences. I'm not sure what the overall time complexity is.
Hacked with a generator I stole from someone else :P
I also wrote it with bitset. I saw someone say it can be hacked; I just want a hacking test so that I can improve my code, see [submission:117940279].
Could you please tell me what "submask" means? I just can't understand this word.
That's true for Prob. D as well :(
I'm guessing you did that. If so, that is the reason why you are green
But the editorial also says that you need to notice that 1 2 1 2 1 2 works for all cases.
How is that noticing done? Isn't it done by trial and error? I tried trial and error too; sometimes it works, sometimes it doesn't.
Isn't writing a simple recursive search (which takes at most 10 min) and running it to find the sequence the best way?
How did you notice the 121212 thing? Is there a logical procedure to notice it?
N is even, the max number of operations is 5000, and n <= 1000. It is important to note that when you do operation 1, the sum replaces the number at the larger position, and for operation 2, the difference replaces the number at the smaller one.
Since N can be as small as 2, it is possible to work with just 2 numbers, meaning you can apply exactly what you did for n=2 to all 500 pairs of numbers in the max case (n=1000); n being even means you won't have to worry about an incomplete pair. Therefore, for n = 2, it can be done in at most 10 operations (500*10=5000).
After coming up with 1 2 1 (trial and error or algebra, not hard to get), notice that the 2nd number is negated but in the 1st position, while the 1st number is still positive in the 2nd position.
we are here now, how do you get -5 -7?
This is like: we got from a b to -b a (we are on the right track since we managed to negate the second number, but the positions are switched); how do we get to -a -b? To do that, you need to negate the second number (which is currently a) and switch the positions of the two numbers, and 121 does what we need...
212 also does the required task; that's how 121121 or 121212 is obtained (at least for me).
I agree with you that writing the brute force here is a totally acceptable approach and not necessarily a sign of being green. Actually, I got lucky and managed to find a solution manually, but while doing so I was questioning the validity of such a method and thinking of coding the brute force.
Ah, could anybody tell me why my solution for D gets WA on test 80? 117914897
I found some AC codes that look just like mine.
I can tell you why: some hackers will inspect your code and construct a test according to your random process. To avoid that, I recommend using a unique random seed, or just srand((unsigned)time(0)*time(0));. Wish you good luck.
Thanks. I think my solution is not good enough. I found that most of those AC codes get WA now.
Can you explain to me how the editorial solves problem B?
Let us take an array of just 2 elements, say {a, b}. Think of it this way: we can convert it to {-a, -b} in 6 moves. How?
Look carefully at these operations:
(I will be denoting the first element as x and second as y, because their values will be changing)
x = x + y; now we have {a + b, b}
y = y - x; {a + b, -a}
x = x + y; {b, -a}
So we can see that these 3 operations got us from {a, b} to {b, -a}. It is clear enough now, that applying these 3 operations again on some {x, y} will get us from {x, y} to {y, -x}, i.e. {-a, -b}.
I hope it was clear. If it wasn't, please let me know!
Here's the simulation for the first pair ($$$a_1$$$, $$$a_2$$$). Similarly we apply the same six operations to the rest of the pairs. The number of actions is $$$n / 2$$$ pairs * $$$6$$$ operations $$$= 3n$$$.
Type 2: $$$a_1$$$, $$$a_2 - a_1$$$
Type 1: $$$(a_1) + (a_2 - a_1) = a_2$$$, $$$a_2 - a_1$$$
Type 2: $$$a_2$$$, $$$(a_2 - a_1) - a_2 = -a_1$$$
Type 1: $$$a_2 + (-a_1) = a_2 - a_1$$$, $$$-a_1$$$
Type 2: $$$a_2 - a_1$$$, $$$(-a_1) - (a_2 - a_1) = -a_2$$$
Type 1: $$$(a_2 - a_1) + (-a_2) = -a_1$$$, $$$-a_2$$$
Final: $$$-a_1$$$, $$$-a_2$$$
Thanks for the brilliantly written problem statements and the excellent pretests! /s
Can anyone post some resources on how to solve problem D? I didn't get the slightest idea!
All you needed to know was SOS DP.
I misread problem A and thought the solution should be O(n). Can someone tell me why my (probably) O(n) solution takes the same time as the O(n^2) ones? 117909301
I hope your team can supply reference implementations; it would leave a deeper impression on readers.
Understanding problem statement C
https://codeforces.me/contest/1523/submission/117903296 : solving problem 1523D without randomness
hacked
Is there any way to solve problem D without randomness?
Technically yes; 117932090 is an $$$O(n \cdot 2^p)$$$ solution with bitset with the max test taking about 2 seconds. I don't think there's a deterministic solution with a better complexity though.
You can also try $$$O(2^{2p})$$$, since there are at most $$$2p$$$ columns that have at least $$$\frac{n}{2}$$$ ones, but it would probably be a bit too slow to pass. Incomparable complexity though.
Isn't the complexity $$$O(n \cdot 2^{usefuls.size()})$$$, and usefuls.size() can go up to $$$2p$$$? (Unless the if(other.count() >= threshold) is pruning significantly, in which case can you explain why?)
That program works in $$$O(n \cdot masks)$$$, where $$$masks$$$ is the number of masks which are submasks of at least $$$\frac{n}{2}$$$ friends. If $$$masks>2^{p+1}$$$, then the total number of submasks over all friends would have to be greater than $$$\frac{n}{2} \cdot 2^{p+1}=n \cdot 2^p$$$, which is impossible because each friend has at most $$$2^p$$$ submasks.
Got it. Thanks!
I've got a solution: Submission
Not sure whether it's correct or not.
I brute-forced at most ~30 bits. I think my solution checks at most $$$2\cdot 2^{P}$$$ possible combinations, but I can't prove it.
How about mine?
hacked. It would be interesting to see how many in-contest solutions would fail after rejudging on all the hacks created after the contest. But we will probably never know.
BTW I used a really simple test case: n/2 friends with the first 15 bits set and n/2 with the last 15 bits set. It would most probably fail even on just n/2 with the first 15 bits set. Strange that such a test case wasn't among the 120+ system tests.
Yeah, I knew it was going to get hacked. I thought of shuffling the bits themselves, but I was eventually mistaken, as the probability of failure is high. I just hoped that it would pass, and it did xD. I am really lucky!
[DELETED]
Problems A, B, and C have difficulty levels of 800, 1100, and 1600: increments of 300 and 500. Seems fine. Problem D has a difficulty level of 2400: an increment of 800! I know the contest had huge prizes, but please think about Div 2 as well next time.
The problem ratings are unknown before the contest; they are calculated afterwards based on how many people solved each problem.
So we can assume the problemsetters try to choose them as balanced as possible, but sometimes the results are not exactly as expected.
Can anyone explain why in the second test case of Problem D, we cannot include currencies 2 and 4?
Input:
5 5 4
11001
10101
10010
01110
11011

Output:
10001
Currencies 2 and 4 are also liked by ceil(5/2) = 3 friends. If we include those currencies (11011), we would get a larger set of 4 currencies than the expected answer of 2 currencies. I must be missing something; can anyone point out what it is?
Only the fifth person likes all the currencies of `11011`. So the set `11011` is liked by only one person, and $$$1 < 3 = \left\lceil 5/2 \right\rceil$$$. So `11011` is not a valid set of currencies. Everyone in the chosen group needs to like every chosen currency!
Thanks for the clarification. I assumed that the ceil(n/2) people can be different for each currency.
I have a solution for problem A with a time complexity of O(n): for each lamp, we store the distance to the closest initially-on lamp on its left and on its right. After that, we loop over all lamps; if a lamp was originally off, the minimum of the two distances is at most k, and the two distances are different (so that both sides don't reach it at the same time), then we turn that lamp on.
My solution
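The idea above can be sketched as follows. This is my own minimal implementation (names are mine), assuming the problem A rule that a dead cell comes alive iff exactly one of its neighbours is alive:

```cpp
#include <bits/stdc++.h>
using namespace std;

// A dead cell i comes alive at time min(dl, dr), where dl/dr are the
// distances to the closest initially-alive cell on each side, but only
// if dl != dr (otherwise both sides reach it simultaneously and the
// "exactly one alive neighbour" rule keeps it dead forever).
string evolve(string s, long long m) {
    int n = s.size();
    const long long INF = LLONG_MAX / 2;
    vector<long long> dl(n, INF), dr(n, INF);
    for (int i = 0; i < n; i++) {
        if (s[i] == '1') dl[i] = 0;
        else if (i > 0 && dl[i - 1] < INF) dl[i] = dl[i - 1] + 1;
    }
    for (int i = n - 1; i >= 0; i--) {
        if (s[i] == '1') dr[i] = 0;
        else if (i + 1 < n && dr[i + 1] < INF) dr[i] = dr[i + 1] + 1;
    }
    for (int i = 0; i < n; i++)
        if (s[i] == '0' && dl[i] != dr[i] && min(dl[i], dr[i]) <= m)
            s[i] = '1';
    return s;
}
```

On the samples from the statement this reproduces the expected states after m iterations, and since m only appears in a comparison, m up to $$$10^9$$$ costs nothing extra.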
Isn't Problem D just this? I wrote the exact same solution.
https://codeforces.me/problemset/problem/364/D ( ͡° ͜ʖ ͡°)
I'm receiving a strange verdict in Problem C:
wrong answer Last number in the line does not match the one that Vasily had (test case 2)
But all the numbers in my output seem to match the given numbers.
Submission: https://codeforces.me/contest/1523/submission/118022778
Test 1, Testcase 2:
Input gives last numbers: 1 1 1 2 2 1 2 1 2
Your solution gives last numbers: 1 1 1 1 1 2 2 2 2
Positions 4, 5, 6 and 8 of the list are wrong.
TL for problem H is so tight that solutions handling queries online with RMQ constructed in $$$\mathcal O(n)$$$ time can hardly pass :(
What is the solution for G?
How to solve G?
Added editorial for task G. Sorry for the delay.
I coded the approach in editorial for Problem D, https://codeforces.me/contest/1523/submission/118136038 .
I am getting TLE on test case 136 which seems to have been added after the contest, as there were only 127 tests for solutions submitted during the contest.
What optimisations should I do to pass the newly added testcase? Thanks.
The Editorial Solution for Problem D is pretty short, so I want to list my steps (I have also seen them in many other solutions) to solve this task, including the explanation for the steps. I will try to emphasize steps which were problems for me, in the hope to help people who are intimidated by this problem (as I was).
We will make a randomized solution. Since we need a set of currencies liked by $$$\lceil \frac{n}{2} \rceil$$$ friends, at least one of about 20 random friends likes every currency of the optimal answer with overwhelming probability (each random friend is in the majority with probability at least $$$\frac{1}{2}$$$). So we can just pick about 20 random friends, look at their favourites and only work with those. Instead of picking 20 random friends, you can also shuffle all friends and then take the first 20 of them. This fails with probability about $$$2^{-20}$$$, which is very low.
Now we have chosen one person. Let's call him Carl. Carl likes at most 15 currencies. To work with the masks more easily, we compress them: we ignore every currency which Carl does not like. This can be done easily with a vector `bitPos` which stores the bit positions of all the currencies Carl likes. We will need it again later to decompress the answer. In the next steps we only regard Carl's currencies and ignore all the other ones. See this example (Example 2 from the problem description).

Now we want to create an array `A[1<<15]`. For a `mask` of currencies, the value `A[mask]` shall be the number of people (Carl included) who like exactly the currencies in `mask`.

Now we have `A[mask]` and want to calculate `B[mask]`. For a `mask` of currencies, the value `B[mask]` tells us the number of people who like at least the currencies in `mask`; they may like even more currencies. So it's the number of people such that `mask` is a submask of their favourites. See this example for `A` and `B`.

How do we calculate `B[mask]`? It follows that $$$B[mask] = \sum\limits_{supmask \supseteq mask} A[supmask]$$$. There are several algorithms, which are better explained on other sites, like an $$$O(3^n)$$$ algorithm here or an $$$O(n \cdot 2^n)$$$ algorithm with a very thorough explanation here. Just be careful: those sites show how to calculate the sums over submasks, while we need the sum over supermasks! The algorithms can easily be adapted for this, e.g. by inverting all bitmasks, by simply reversing the array `A[]` (and reversing it again in the end), or by alternating the treatment of the bits, as explained here. This is the most difficult algorithmic part of this problem (at least it was for me), but also the one you will learn the most from, so take your time here!

Now we have `B[mask]`. We can check all `mask`s for `B[mask] >= roundUp(people/2)`. The mask with the highest number of set bits is our answer for Carl. Now we decompress that mask with `bitPos` and can return it. Don't forget: we only tested for Carl, so we now need to repeat this for the other chosen people.

Here is my commented submission: 118139752
I hope it helps. Feel free to ask me questions or correct my mistakes!
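To make the supermask-sum step concrete, here is a minimal sketch of the $$$O(p \cdot 2^p)$$$ SOS DP adapted to sum over supermasks (function name is mine):

```cpp
#include <bits/stdc++.h>
using namespace std;

// Minimal sketch of the supermask-sum step: given f[mask] = A[mask] =
// number of people who like exactly `mask`, return B with
// B[mask] = sum of A[supmask] over all supermasks of `mask`,
// computed in O(p * 2^p) via SOS DP with the bit direction flipped
// (we add the value of the mask WITH the bit into the mask WITHOUT it).
vector<long long> supersetSums(vector<long long> f, int p) {
    for (int bit = 0; bit < p; bit++)
        for (int mask = 0; mask < (1 << p); mask++)
            if (!(mask & (1 << bit)))            // mask lacks this bit
                f[mask] += f[mask | (1 << bit)]; // absorb its supermask
    return f;
}
```

For example, with $$$p = 2$$$ and `A = {1, 2, 3, 4}` (indexed by mask `00, 01, 10, 11`), `B[00]` collects all four entries while `B[11]` keeps only its own.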
Appreciate your detailed explanation! I would like to ask whether it is a good idea to set the number of iterations to exactly $$$k$$$ (say $$$k = 30$$$), instead of `min(k, (int)like.size())`.
For test case #144 there are only two friends, which means there are only two iterations if you take the minimum. Another similar case is #162, with 4 iterations. I sometimes failed only these two cases, with `wrong answer The contestant's answer is either too small`.

Test Case #144
Test Case #162
You need to distinguish 2 ways to obtain random people. You can either choose 20 people at random:
Or you can shuffle first and then choose the first 20 people:
If there are, e.g., only 2 people, then with the first approach you still need all 20 tests: using only two checks, a failure could happen there, e.g. if we test person 1 twice. But with the second approach, 2 checks are enough, because we can guarantee that we checked ALL people.
You can also modify the first solution and mark people you already checked and don't repeat them, then 2 checks would be enough too.
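A minimal sketch of the shuffle-first approach described above (names are mine): shuffling once and taking the first $$$\min(k, n)$$$ indices guarantees no repeats, so tiny inputs are covered exactly.

```cpp
#include <bits/stdc++.h>
using namespace std;

// Shuffle all n person indices once, then keep the first min(n, k).
// When n <= k this examines every person exactly once, avoiding the
// failure mode of sampling with repetition on very small inputs.
vector<int> pickCandidates(int n, int k, unsigned seed) {
    vector<int> order(n);
    iota(order.begin(), order.end(), 0);   // 0, 1, ..., n-1
    mt19937 rng(seed);
    shuffle(order.begin(), order.end(), rng);
    order.resize(min(n, k));
    return order;
}
```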
I figured out what I did wrong: I misplaced the shuffle call inside the for-loop. I learned a lot from you. Thank you very much!
Thank you mateee... And definitely the super set part was really tough to get...
Thank you soooo much :)
I have a question; what is a supermask?
I know what a submask is :P
UPD: I think I understood what a submask is from the solution. Thank you so much for your effort <3
It seems that you understood already, but just for the record: if $$$m$$$ is a submask of $$$M$$$, then $$$M$$$ is a supermask of $$$m$$$, and vice versa. You can also see this in the example if we calculate e.g. `B[100] = A[111] + A[110] + A[101] + A[100]`: `B` is the sum of all its supermasks in `A`.

Am I the only one who needs a proof of 1523C - Compression and Expansion?
Solution to H with an extra log in complexity: 118228547. Yes, the pragmas are super important.
Both in precomputation and in answering queries, I have classic states "if we make up to $$$2^e$$$ jumps, how far can we get for each $$$k$$$?". I precompute for each possible $$$e$$$, but when answering a query, I remember the largest number of jumps that's not enough to reach $$$r$$$ and how far we can get for each $$$k$$$ for this specific number of jumps (it's very binsearch-like). Incrementing a state by $$$2^e$$$ jumps is easy in $$$O(k^2 \log)$$$, where the log comes from a Fenwick tree. In total, we precompute $$$O(n \log)$$$ times and for each query, we also update states $$$O(n \log)$$$ times, where these logs come from $$$2^e$$$ jumps for which we precompute. In total, it's $$$O(n k^2 \log^2)$$$.
The key optimisation is choosing the right order of data in memory! Instead of Fenwick trees on arrays of integers for each $$$2^e$$$ and each $$$k$$$, we put whole chunks of $$$31$$$ values (max. distance for each $$$k$$$) into Fenwick trees for each $$$2^e$$$. Complexity is the same since a query on one tree is $$$O(k \log)$$$ now, but the operations done inside the trees are very simple: given an input chunk $$$in$$$ and an output chunk $$$out$$$, for each $$$k$$$ from $$$0$$$ to $$$30$$$, do $$$out_k = \max(out_k, in_k)$$$. This can be vectorised easily. Any overhead is only $$$O((n+q)k^2 \log)$$$. Bam, 6 times faster code!
In fact, if we use chunks of $$$32$$$ shorts, that's 64 bytes, perfectly fitting in cache lines and ~30% faster.
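The vectorisation trick above boils down to storing whole chunks in the tree nodes so that combining two nodes is a branch-free elementwise max. A minimal sketch of just that combining step (chunk type and sizes are mine):

```cpp
#include <bits/stdc++.h>
using namespace std;

// One tree node holds the max distance for every k at once; merging two
// nodes is then a plain elementwise max over the chunk, which compilers
// auto-vectorise easily. With 32 shorts per chunk this is exactly one
// 64-byte cache line.
constexpr int K = 32;
using Chunk = array<short, K>;

void chunkMax(Chunk& out, const Chunk& in) {
    for (int k = 0; k < K; k++)        // simple, SIMD-friendly loop
        out[k] = max(out[k], in[k]);
}
```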
For problem A I've coded exactly what the editorial says, but I get TLE. Please help me find the problem. Here is my submission: 118242830
A few suggestions:

- Check whether `a[i]` is `0` or `1` for the first two if-statements.
- Use `vector<char> a`; `a[i]` is the same as `s[i]`.
- Keep `t` copied from `s`, perform the in-place modification on `t`, and swap them after each iteration. Output `s` at the end.
- `t[i]` should be `0` or `1`.

I'm coming from Python, so I thought strings are immutable. Also, yeah, I forgot to check whether the cell was actually dead or not. I made the changes and it's accepted now. Thanks!!
In C++, you can assume everything's mutable — if you're wrong, the compiler will tell you.
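To illustrate that point with a trivial sketch of my own: `std::string` supports in-place character writes, unlike Python's immutable `str`.

```cpp
#include <bits/stdc++.h>
using namespace std;

// std::string is mutable: individual characters can be assigned in
// place, with no new string allocated per edit.
string flipBits(string s) {
    for (char& c : s)
        c = (c == '0') ? '1' : '0';   // direct in-place write
    return s;
}
```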
Can somebody please tell me why my solution to D only barely passes with complexity $$$O(iter \cdot (3^p + np))$$$? Submission: 118593142. It takes 2979 ms; can you suggest some improvements to the code to make it run faster? Thanks.
How should we process the DP states in F? The best I can do is probably $$$O(2^n \cdot n \cdot m^2)$$$, because of the need to recalculate all F states after each visited quest. How do we avoid this? Also, I've seen some people using a priority queue to determine the correct order of states. How do you use it?
Can anyone take a look at my code for problem C (Compression and Expansion)? I think I did the same thing as the editorial, but I'm not getting AC.
Solution link: https://codeforces.me/contest/1523/submission/120891492
G can also be solved in $$$O((n+m) \log^2 n)$$$ time although in practice it barely passes. You can find the first segment with length at least $$$k$$$ which is inside some given segment $$$[l,r]$$$ in $$$O(\log n)$$$ online with $$$O((n+m) \log^2 n)$$$ preprocessing of a fixed set of segments (all the segments given in the input) using fractional cascading, sparse tables, and some clever coordinate transformation tricks. I had to switch to a different technique when $$$k$$$ is small to pass. Code: 121348828 Probably using $$$O(n)$$$ RMQ would be even faster.
How to prove in H that minimizing $$$cnt$$$ always has higher priority than jumping further to the right after some $$$t$$$ steps?
I didn't understand the second problem properly. Can anyone explain it to me, please?