jqdai0815's blog

By jqdai0815, history, 8 hours ago, In English

Thank you for participating in this round! Problem I is authored by orzdevinwang and the rest by me.

Sorry for the weak pretest in problem B.

Hints and code are WIP.

2061A - Kevin and Arithmetic

Hint 1
Solution

2061B - Kevin and Geometry

Hint 1
Hint 2
Hint 3
Solution

2061C - Kevin and Puzzle

Hint 1
Hint 2
Solution

2061D - Kevin and Numbers

Hint 1
Hint 2
Hint 3
Solution

2061E - Kevin and And

Hint 1
Hint 2
Hint 3
Solution

2061F1 - Kevin and Binary String (Easy Version)

2061F2 - Kevin and Binary String (Hard Version)

Hint 1
Hint 2
Hint 3
Hint 4
Hint 5
Solution

2061G - Kevin and Teams

Hint 1
Hint 2
Hint 3
Solution

2061H1 - Kevin and Stones (Easy Version)

2061H2 - Kevin and Stones (Hard Version)

Solution

2061I - Kevin and Nivek

Hint 1
Hint 2
Hint 3
Hint 4
Solution

»
7 hours ago, # |

Congrats jiangly on becoming jiangly :))

»
7 hours ago, # |

Fun fact for problem D: when I was testing this round, this problem was Div. 2 B with $$$n \le 500$$$. (=・ω・=)

»
7 hours ago, # |

Someone please tell me why this code 302137216 for D got accepted while this one 302137051 didn't, even though the time complexity is the same.

»
7 hours ago, # |

If v[0] = 0, isn't it obvious that he is telling the truth, since there is nobody to the left?

»
7 hours ago, # |

E is greedy bait.

»
7 hours ago, # |


Tourist -> <-

Congrats jiangly

»
6 hours ago, # |

Thanks to the authors for the amazing problems! I really enjoyed the round!

For G, a good way to convince yourself that $$$\frac{n + 1}{3}$$$ is attainable is to fix three vertices a, b, c. Say the edge a-b is 1; then, by induction, among the other vertices there is a matching of size $$$\frac{n + 1}{3} - 1$$$: if it is of color 1 we are done, and otherwise the edges a-c and b-c have to be 1 as well (or we would get a 0-colored matching). But now we can make another choice for the vertex c, and since a-b is 1 we will still get a-c and b-c to be 1. So every edge from a or b is 1, as is every edge from c for any c... and finally the whole graph is 1, which is absurd.

I've found this helpful, since knowing the answer gives you the idea of getting one edge out of every 3 vertices, hence the mixed-color triplets from the solution, from which you can find a linear construction.

»
6 hours ago, # |
Rev. 2

nvm

»
6 hours ago, # |

C is a nice problem. Thank you

»
6 hours ago, # |


Same same but different

»
6 hours ago, # |
Rev. 2

On problem D, I found a solution that actually builds A into B rather than going in reverse, by recursively trying to build smaller elements into B. Definitely way more complicated than the intended solution, though.

»
6 hours ago, # |

Let the rank be named after both alternatives, "tourist" and "jiangly". Now tourist will have to reach "jiangly" rating xD

»
6 hours ago, # |
Rev. 2

Sorry for the weak pretest in problem B

Hmm... So it was not intentional? I found this a very good kind of problem to have weak pretests. Some 2017 Codeforces vibes :) There is one simple correct solution and plenty of ways to get an "almost-correct" solution.

  • »
    »
    5 hours ago, # ^ |
    Rev. 2

    Just wondering, is there a way to tell whether main test 11 existed before the round or not? If yes, was it deliberately excluded from pretests?

    I did not realize that pretests were this weak, so I just assumed the hacks were all related to hashmaps. :P

    • »
      »
      »
      4 hours ago, # ^ |

      Take a look at the first hack; it is indeed that test, so it was added by hacks.

      • »
        »
        »
        »
        4 hours ago, # ^ |
        Rev. 2

        Oh ok. I assume all of these wrong solutions would have passed if it were not for hacks? So then "weak pretest" should actually be "weak tests."

»
6 hours ago, # |

I was thinking of the same idea for E but couldn't figure out that this is a convex function, and hence couldn't reach the conclusion that taking the $$$k$$$ largest reductions works. Can someone share how they got the intuition for this being a convex function, or is there another solution that doesn't use it?

  • »
    »
    5 hours ago, # ^ |
    Rev. 3

    You could do the same thing with brute force, up to the point of storing all "improvements"; say arr[i][j] = num(i, j+1) - num(i, j). Then you could maintain a multiset or some other sorted data structure, where initially you add every arr[i][0]. Repeatedly remove the maximum element, say arr[i][j], and insert arr[i][j+1] into the multiset (since that move has now become available). Repeat k times and you get the final sum.

    Edit: nvm this is completely wrong

    • »
      »
      »
      5 hours ago, # ^ |

      But without knowing that this is convex, this greedy can't be proven correct. In a scenario where it isn't convex, there could be a case where for some $$$i_1$$$ the first improvement is not as good as the first improvement for some other $$$i_2$$$, but the second improvement for $$$i_1$$$ is huge. If $$$k=2$$$, the greedy algorithm picks the first improvement of $$$i_2$$$ and then the first improvement of $$$i_1$$$, whereas the correct answer is the first and second improvements of $$$i_1$$$.
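      A concrete made-up instance: if the improvements for $$$i_1$$$ are $$$(3, 10)$$$ and for $$$i_2$$$ are $$$(5, 1)$$$, then with $$$k = 2$$$ the multiset greedy takes $$$5 + 3 = 8$$$, while the optimum is $$$3 + 10 = 13$$$.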

      • »
        »
        »
        »
        5 hours ago, # ^ |

        wait yeah you're totally right. I completely fakesolved it

  • »
    »
    4 hours ago, # ^ |

    Consider the list of bits that get turned off as you apply the AND operations to one element. Say the first operation turns off the bit in position 8. After that, no set of operations should modify anything at bit position 8 or above, because otherwise we would have had a better first move. Now note that $$$2^x \ge \sum 2^y$$$ over any subset of $$$y \in \{0, 1, 2, \dots, x-1\}$$$. Thus, convexity.
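    To spell out that inequality (my own one-line justification): $$$\sum_{y=0}^{x-1} 2^y = 2^x - 1 < 2^x$$$, so clearing a single higher bit always outweighs everything the lower bits could contribute afterwards, which is exactly why the marginal gains can only shrink.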

»
5 hours ago, # |

it's over

»
5 hours ago, # |

E: wtf, brute force?! The facts that a sum of convex functions is convex and that $$$2^b > \sum_{k=0}^{b-1} 2^k$$$ don't justify this being E, and I think it would have had a lot more solutions with a lower constraint on M and a lower placement in the contest. It's really hard to see why this particular nested loop should be fast when many others with the same number of iterations aren't, especially without knowledge of cache efficiency and assembly instructions (specifically that min/max are their own instructions, not compare+branch). This just isn't a very good algorithmic problem.

Note that you don't need to prove convexity. Think of it as greedy decisions: operations on different $$$i$$$ are independent, and it's always better to remove the largest possible bits from any $$$i$$$, so the first $$$k$$$ operations on any $$$i$$$ will "do more" than anything we could possibly do with the $$$(k+1)$$$-th and onwards. This gives enough intuition, and then brute force goes brrr.
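For concreteness, a minimal sketch of the subset brute force plus "take the k largest marginal reductions" greedy described above; the I/O format (t test cases with n, m, k, then a, then b) and variable names are my own assumptions, so adjust to the actual statement:

```cpp
#include <bits/stdc++.h>
using namespace std;

int main() {
    ios_base::sync_with_stdio(false);
    cin.tie(nullptr);
    int t;
    cin >> t;
    while (t--) {
        int n, m;
        long long k;
        cin >> n >> m >> k;
        vector<long long> a(n), b(m);
        for (auto &x : a) cin >> x;
        for (auto &x : b) cin >> x;

        // andSub[s] = AND of the masks selected by subset s (s = 0 means "no mask").
        int full = 1 << m;
        vector<long long> andSub(full, -1);             // -1 has all bits set
        for (int s = 1; s < full; s++)
            andSub[s] = andSub[s & (s - 1)] & b[__builtin_ctz(s)];

        long long total = 0;
        vector<long long> gains;                        // marginal reductions over all i
        for (int i = 0; i < n; i++) {
            total += a[i];
            // best[c] = smallest value reachable for a[i] using at most c masks.
            vector<long long> best(m + 1, a[i]);
            for (int s = 1; s < full; s++) {
                int c = __builtin_popcount(s);
                best[c] = min(best[c], a[i] & andSub[s]);
            }
            for (int c = 1; c <= m; c++) {
                best[c] = min(best[c], best[c - 1]);     // using fewer masks is always allowed
                gains.push_back(best[c - 1] - best[c]);  // convexity => nonincreasing per i
            }
        }
        // Because the per-i gains are nonincreasing, taking the k largest gains globally
        // never skips an earlier gain of the same i, so the greedy is consistent.
        sort(gains.rbegin(), gains.rend());
        for (long long j = 0; j < (long long)gains.size() && j < k; j++) total -= gains[j];
        cout << total << "\n";
    }
    return 0;
}
```

The inner subset loop is exactly the roughly $$$n \cdot 2^m \approx 10^8$$$ part being debated here; if __builtin_popcount inside it turns out too slow, replacing it with a precomputed table is the easy fallback.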

  • »
    »
    5 hours ago, # ^ |
    Rev. 2

    I don't understand your comment. The complexity is on the order of $$$10^8$$$, and expecting that to pass is not at all unreasonable?

    • »
      »
      »
      3 hours ago, # ^ |

      Complexity on the order of 10^8 can easily hit the TL. Constructing 1e7 2D vectors by randomly appending 3e7 elements takes 2 seconds on custom invocation because of cache effects and indirection; that's still an order of magnitude fewer loop iterations than this case.

      There's a rule of thumb, yes, but it doesn't say 10^8, it says 10^7 (or 10^6 in older times, but CPU performance hasn't been increasing all that much), and even that comes with caveats: it's plenty in 99% of cases, but there can be many hidden instructions behind C++ expressions. You can't estimate real runtime from a number alone, without looking at specific code features like the ones I mentioned above.

      Examples: At IOI 2012, I burned myself by reconstructing std::queue ~10^6 times; literally adding the static keyword made the code several times faster. In 1975G - Zimpha Fan Club, complexity on the order of 10^9 exceeded the TL simply because division with a dynamic modulus can't be optimised by the compiler and therefore uses divq, which has high CPI. Many times I've sped up complicated DFS processing of a tree by doing a simple preorder renumbering at the start and the end. Then there's this meme I made simply because it's such an efficient and simple optimisation.

  • »
    »
    4 hours ago, # ^ |

    I agree with you on this part. I was also very hesitant about using __builtin_popcount inside the loop and thought of precomputing it, but it is fast enough.

    I think these days it's easier to just assume the solution is fast enough instead of using the proper idea in many problems. I was thinking N <= 1e6 and M <= 5 would have been a better choice too.

    • »
      »
      »
      4 hours ago, # ^ |

      Can you please check this solution for E? It's giving TLE on test 19: https://codeforces.me/contest/2061/submission/302147220

    • »
      »
      »
      4 hours ago, # ^ |
      Rev. 2

      The speed of __builtin_popcount is pretty hardware dependent (and for me, compiler dependent too), so I would not dare to put it through 1e8 loop iterations. This is why I went for the second option below.

      That said, if it does not pass, it is easy to change to a precomputation. So the options are roughly:

      • do the simple thing: -50 and a few minutes if it's wrong, nothing lost if it's right
      • do the correct thing: lose maybe a minute, but it's definitely OK

      I think it is a fair and balanced (even interesting) decision. It would be very different if you were only allowed one submission with no chance to correct a TLE.
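      For reference, the precomputation in question is just a popcount table over all $$$2^m$$$ subsets. A tiny self-contained sketch, with m = 10 assumed:

```cpp
#include <bits/stdc++.h>
using namespace std;

int main() {
    const int m = 10;                      // number of masks (assumed bound)
    vector<int> pc(1 << m, 0);
    for (int s = 1; s < (1 << m); s++)
        pc[s] = pc[s >> 1] + (s & 1);      // popcount of s derived from popcount of s>>1
    cout << pc[(1 << m) - 1] << "\n";      // prints 10
    return 0;
}
```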

  • »
    »
    4 hours ago, # ^ |

    How much lower? I feel that even if the brute force were more evident, it would still be a harder problem than C and D. I also don't understand the part where you say we don't need to prove convexity. We can't prove that greedily taking the best decisions across all $$$i$$$ is correct without convexity, right (comment regarding the same)?

    The part where you mention that it is better to remove the largest bits first: isn't that indirectly the same argument that convexity makes? The convexity proof in the editorial basically says the same thing, that when we go from $$$g_{i-1}$$$ to $$$g_{i+1}$$$, the largest bit is removed when we first go from $$$g_{i-1}$$$ to $$$g_{i}$$$, and the smaller bits are removed subsequently.

    • »
      »
      »
      3 hours ago, # ^ |

      I mean we don't need to formally prove convexity — or even anything remotely close to that. What I described is intuitively directing towards the proof in editorial, but isn't itself a proof, just intuition. The kind of thinking that makes sense but could easily overlook something. A solution can be presented in full to demonstrate a problem's difficulty or handwaved to make it seem easier — I know, I've done it.

      I think with more obvious constraints, C/D/E are basically interchangeable. C and D have basically equal number of solutions and there's always a dropoff for later problems, though it's not too steep early on.

  • »
    »
    4 hours ago, # ^ |

    I don't think it is bad to test understanding of big differences in constant factor, especially when that constant factor is as much as a factor of 10 away. I am very sure that an iteration of this kind with 1e9 would have passed (in C++) too. (Not all implementations are equal; for example, bit counts have to be precomputed.)

    • »
      »
      »
      2 hours ago, # ^ |

      It's not too bad and it's a useful skill that helps from time to time in various problems anyway, but I don't think it should have a major role in not-too-hard problems. Sometimes you screw up implementation, don't pay attention to constant factor and your code is slow. Sometimes you can squeeze an inefficient algorithm through very efficient implementation, even with tricks of its own. If it's one of the last problems that requires constant optimisation on top of other stuff — that's fine, I'll just take it as a juicy bone. But in this case I feel like it's artificially inflating complexity and at the level it was placed, it's measuring weight of balls more than that specific skill.

      It's a valid contest problem, but there ought to be a tendency to use such problems only rarely, and even then preferably at a higher level.

  • »
    »
    10 minutes ago, # ^ |

    Just for my understanding: you don't even need the fact that some removal has a lower reduction than the sum of all previous reductions, right? Having a lower reduction than the last removal is sufficient (implying that the sequence of reductions is simply non-increasing)?

»
5 hours ago, # |

In problem C, can someone please explain why "If they are a liar, since two liars cannot stand next to each other, the (i−1)-th person must be honest. In this case, there are a(i−1) + 1 liars to their left."

Shouldn't it be a(i-1) liars to the left of the i-th person (the i-th person being the liar)? Because the (i-1)-th person is honest, there are a(i-1) liars up to position i-1, so there are a(i-1) liars to the left of the i-th person, right?

  • »
    »
    5 hours ago, # ^ |

    Yeah, shouldn't that be a(i-1)+1 liars to the left of the (i+1)-th person? (That's what makes the DP transition for the liar case.)

»
5 hours ago, # |

D was a good application of heaps.
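For reference, a minimal sketch of one heap/multiset take on the reverse idea (repeatedly split the largest element of b into its two halves and match equal maxima against a); the sum check, the size safeguard, and the I/O assumptions are my own, so treat it as a sketch rather than the intended solution:

```cpp
#include <bits/stdc++.h>
using namespace std;

int main() {
    ios_base::sync_with_stdio(false);
    cin.tie(nullptr);
    int t;
    cin >> t;
    while (t--) {
        int n, m;
        cin >> n >> m;
        vector<long long> a(n), b(m);                    // values assumed to be >= 1
        long long sa = 0, sb = 0;
        for (auto &x : a) { cin >> x; sa += x; }
        for (auto &x : b) { cin >> x; sb += x; }

        multiset<long long> A(a.begin(), a.end());       // remaining targets from a
        priority_queue<long long> B(b.begin(), b.end()); // pieces of b, largest first
        bool ok = (sa == sb);                            // merging preserves the total sum
        while (ok && !B.empty()) {
            if (B.size() > A.size()) { ok = false; break; } // more pieces than targets left
            long long top = B.top();
            auto it = prev(A.end());                     // largest remaining element of a
            if (top == *it) {                            // equal maxima must be matched together
                B.pop();
                A.erase(it);
            } else if (top > *it) {                      // cannot match anything: split it
                B.pop();
                B.push(top / 2);
                B.push(top - top / 2);
            } else {                                     // largest of a can never be produced
                ok = false;
            }
        }
        cout << ((ok && A.empty()) ? "Yes" : "No") << "\n";
    }
    return 0;
}
```

The B.size() > A.size() check doubles as the termination bound: every split increases the piece count by one, and having more pieces than remaining targets can never lead to a full matching.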

»
5 hours ago, # |

Is the Codeforces problemset really this hard? I have to concentrate hard on B and C problems for a long time in order to arrive at the solution.

»
5 hours ago, # |

My D TLE'd due to hash collisions smh....

»
4 hours ago, # |
Rev. 2

In case you could not understand the editorial solution for C:

Let dp[i][0] and dp[i][1] denote the number of valid configurations of the first i people where the i-th person is honest and a liar, respectively.

The (i-1)-th person and the i-th person can be:

  1. honest-honest: the condition is that a[i-1] must equal a[i], since both statements are true.

  2. honest-liar: no condition is needed in this case.

  3. liar-honest: the condition is that the (i-2)-th person must be honest, so a[i-2] + 1 = a[i].

Overall, dp[n-1][0] + dp[n-1][1] gives the answer; a small sketch follows below.
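A minimal sketch of this DP, under my own assumptions (0-indexed a, input given as t test cases with n followed by a, answers modulo 998244353; check the statement for the actual modulus and I/O format):

```cpp
#include <bits/stdc++.h>
using namespace std;

const long long MOD = 998244353;  // assumed modulus -- check the actual statement

int main() {
    ios_base::sync_with_stdio(false);
    cin.tie(nullptr);
    int t;
    cin >> t;
    while (t--) {
        int n;
        cin >> n;
        vector<long long> a(n);
        for (auto &x : a) cin >> x;
        // dp0[i]: configurations where person i is honest, dp1[i]: person i is a liar.
        vector<long long> dp0(n, 0), dp1(n, 0);
        dp0[0] = (a[0] == 0);  // an honest first person must see 0 liars to the left
        dp1[0] = 1;            // a lying first person may claim anything
        for (int i = 1; i < n; i++) {
            // honest-honest: both statements are true, so they must agree
            if (a[i] == a[i - 1]) dp0[i] = (dp0[i] + dp0[i - 1]) % MOD;
            // liar-honest: person i-1 is a liar, adding exactly one liar to the count
            if (i >= 2 ? a[i] == a[i - 2] + 1 : a[i] == 1)
                dp0[i] = (dp0[i] + dp1[i - 1]) % MOD;
            // honest-liar: no condition on what the liar claims
            dp1[i] = dp0[i - 1];
        }
        cout << (dp0[n - 1] + dp1[n - 1]) % MOD << "\n";
    }
    return 0;
}
```

Note that the liar-honest transition uses dp1[i-1], which is only ever reached from dp0[i-2], so the "person i-2 must be honest" requirement is enforced automatically.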

  • »
    »
    3 hours ago, # ^ |

    I had the same structure for my dp but couldn't find the transition... Thank you!!

    • »
      »
      »
      3 hours ago, # ^ |

      same, I panicked, went into random ideas and stopped seeing things clearly during the contest :(

      Lesson learnt!

»
3 hours ago, # |
Rev. 2

Hi, why does solution 302120893 for D get TLE while 302121766 passes comfortably? The time complexity should be about the same.

  • »
    »
    3 hours ago, # ^ |

    Suppose you have $$$2^{30}$$$ in a query.

    You query $$$2^{29}$$$ 2 times. Then, for each of them, you query $$$2^{28}$$$, so in total you query $$$2^{28}$$$ 4 times.

    Similarly, you end up reaching 1 about $$$2^{30}$$$ times.
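    In recurrence terms (my own framing of the above): without consuming matched values or memoizing, the work on a value $$$x$$$ satisfies $$$T(x) = 2T(\lfloor x/2 \rfloor) + O(1)$$$, which solves to $$$T(x) = \Theta(x)$$$, so a single value near $$$10^9$$$ already means on the order of $$$10^9$$$ calls.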

»
3 hours ago, # |
Rev. 9

Can anyone tell me why, for problem D, my 302152821 causes SIGTERM on this test case (given in the problem):

1
1 1
1
1000000000

while everything else is fine.

In the other submission 302153118 I just added one line (mentioned in a comment) and it got accepted.

»
2 hours ago, # |
Rev. 2

In problem D, I'm trying to understand the time complexity of this solution. This straightforward DFS using std::map runs in 300ms:

https://codeforces.me/contest/2061/submission/302153025

For each target x, I appear to be doing $$$2^{\log x}$$$ recursive calls in the worst case, each with a $$$\log n$$$ map operation, but the actual runtime looks much better. What am I missing? Are the test cases just not strict enough?

Also, I don't understand why std::map is much faster here. If I use unordered_map with the exact same solution, it TLEs.

Edit: OK, nvm, I got it. The number of recursive calls is bounded by the input size, since it gets consumed, and unordered_map was simply targeted with collisions; it works fine with a custom hash function.
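For reference, the custom hash usually meant here is the randomized splitmix64 one that circulates on Codeforces; this is a generic snippet, not tied to any particular submission above:

```cpp
#include <bits/stdc++.h>
using namespace std;

// Randomized splitmix64-style hash, commonly used on Codeforces to make
// unordered_map robust against anti-hash (collision) tests.
struct custom_hash {
    static uint64_t splitmix64(uint64_t x) {
        x += 0x9e3779b97f4a7c15ULL;
        x = (x ^ (x >> 30)) * 0xbf58476d1ce4e5b9ULL;
        x = (x ^ (x >> 27)) * 0x94d049bb133111ebULL;
        return x ^ (x >> 31);
    }
    size_t operator()(uint64_t x) const {
        // per-run random offset so hacks prepared offline do not apply
        static const uint64_t FIXED_RANDOM =
            chrono::steady_clock::now().time_since_epoch().count();
        return splitmix64(x + FIXED_RANDOM);
    }
};

int main() {
    unordered_map<long long, int, custom_hash> cnt;  // usage: drop-in third template argument
    cnt[1000000000LL] += 1;
    cout << cnt[1000000000LL] << "\n";               // prints 1
    return 0;
}
```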