maroonrk's blog

By maroonrk, history, 4 years ago, In English

We will hold AtCoder Grand Contest 049. This contest counts for GP30 scores.

The point values will be 400 — 600 — 800 — 1000 — 1600 — 2400.

Please note that the contest duration is unusually long.

We are looking forward to your participation!

  • +246

»
4 years ago, # |
  -59

Is this the "WTF won't happen in the foreseeable future, so we're giving you a brutal contest to make up for it"? Complete with deceptive scores? 3 hours and 20 minutes is a LOT.

»
4 years ago, # |
  -83

Just 2020 things

»
4 years ago, # |
  +18

Up. Contest starts in 10 minutes.

»
4 years ago, # |
  +14

Does the solution to D require advanced things like FFT or some convolutions, or is it just DP / observations / combinatorics / inclusion-exclusion, etc.?

I'm asking because I want to be sure I have enough knowledge to solve it on my own.

  • »
    »
    4 years ago, # ^ |
    Rev. 2   +12

    I solved it with just a knapsack DP.

  • »
    »
    4 years ago, # ^ |
    Rev. 2   -27

    The code is a short and simple knapsack-like DP, but it is quite challenging to come up with.
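
    For readers unfamiliar with the pattern the comments keep naming, here is a minimal sketch of a generic counting knapsack with unbounded item counts (this is only an illustration of the technique, not the actual AGC 049 D solution; the weights $$$\frac{i(i+1)}{2}$$$ are borrowed from the reply below):

    ```python
    def count_unbounded(weights, M):
        """Count, for every m in 0..M, the number of multisets of the given
        weights that sum to m (unbounded knapsack, counting version)."""
        dp = [0] * (M + 1)
        dp[0] = 1  # exactly one way to form 0: take nothing
        for w in weights:
            # Forward sweep allows reusing the same weight any number of times.
            for m in range(w, M + 1):
                dp[m] += dp[m - w]
        return dp

    # Triangular-number weights i*(i+1)/2, as in the discussion of D below.
    weights = [i * (i + 1) // 2 for i in range(1, 5)]  # [1, 3, 6, 10]
    print(count_unbounded(weights, 10))
    ```

    Processing weights in the outer loop and values in an inner forward sweep is what makes each weight reusable; reversing the inner loop would give the 0/1-knapsack variant instead.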

    • »
      »
      »
      4 years ago, # ^ |
        -23

      It didn't seem that hard to me. When you pick the range of minima $$$A_{l,r}$$$ (which seems like a good choice, since we could go below zero otherwise), you get independent problems for $$$m = \sum_{i=1}^n \frac{i(i+1)}{2} C_i$$$, where we count the independent choices of $$$C_i \ge 0$$$, $$$C_n \gt 0$$$. We sum our DP over all choices of $$$l$$$, $$$r$$$, $$$m_{left}$$$, $$$m_{right}$$$ such that $$$N$$$ divides $$$M-m_{left}-m_{right}$$$ (since that determines the minimum value). There are a few more observations, e.g. that $$$l$$$ and $$$r$$$ are close to the border, and that we can incorporate the "$$$N$$$ divides" part into our DP in the same way we computed it.

      Quite straightforward compared to A.

»
4 years ago, # |
Rev. 2   +58

UDGIODFGYOGBFYOs[ ntu450(*%$^57430

I sent an almost correct solution to E when there was still an hour to go. I found the bug in the very last minute and was literally a few seconds short. It already has AC now ;__;. To be more precise, I did send a bug-free version in time, but it was a version with a few reverted optimizations that still got TLE, and re-applying them took me those few seconds too many.

»
4 years ago, # |
  +5

For me, A was more difficult than B.

»
4 years ago, # |
  +43

This A, lol. I have already seen a few contests where the key idea from here was the key idea for one of the harder problems in the set. Just AtCoder things.

  • »
    »
    4 years ago, # ^ |
      0

    Sorry, may I ask for a more detailed proof of the technique used in the editorial of A? (i.e. how to transform the expected value into that sum) It seems very nontrivial to me, though apparently it should be trivial(?)

    • »
      »
      »
      4 years ago, # ^ |
        0

      I just finished this problem; here is how I formally proved everything.

      We want to prove: the expected value of the random variable $$$X$$$, which outputs the number of steps in a given operation (i.e. the length of an occurrence), equals the sum over all vertices of the probability of choosing each vertex, that is, for each vertex $$$i$$$, the probability that $$$i$$$ is in the occurrence.

      $$$E[X]=\sum_{w\in S}P(w)X(w)$$$, where $$$w$$$ ranges over occurrences and $$$S$$$ is the sample space.

      Let's say that $$$w=v_1 v_2 \ldots v_k$$$; then $$$X(w)=k$$$, so $$$P(w)X(w)=kP(w)=P(w)+P(w)+\ldots+P(w)$$$ ($$$k$$$ times).

      Then for each vertex in an occurrence $$$w$$$, we have one $$$P(w)$$$ term in the corresponding summand. If we rearrange these terms, we get $$$E[X]=\sum_{w\in S}P(w)X(w)=\sum_{w: 1 \in w} P(w)+\sum_{w: 2 \in w} P(w)+\ldots+\sum_{w: n \in w} P(w)$$$

      (by collecting together all terms belonging to the same vertex).

      So by the $$$3^{rd}$$$ axiom of probability (since distinct occurrences are disjoint events), $$$\sum_{w: j \in w} P(w)$$$ is just the probability that $$$j$$$ is in the occurrence, which is exactly what we set out to prove.
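
      The identity proved above, $$$E[X]=\sum_j P(j \in w)$$$, can be sanity-checked by brute force on a toy sample space. Below the "occurrence" is a uniformly random contiguous range of $$$\{1,\ldots,n\}$$$ and $$$X$$$ is its length (a made-up toy model for illustration, not the actual setup of problem A):

      ```python
      from fractions import Fraction

      n = 5
      # Sample space: all contiguous ranges [l, r] of {1..n}, chosen uniformly.
      ranges = [(l, r) for l in range(1, n + 1) for r in range(l, n + 1)]
      p = Fraction(1, len(ranges))  # probability of each outcome

      # Direct expectation of the length X(w) = r - l + 1.
      e_direct = sum(p * (r - l + 1) for l, r in ranges)

      # Same expectation via the identity E[X] = sum_i P(i is in the range).
      e_indicator = sum(
          sum(p for l, r in ranges if l <= i <= r) for i in range(1, n + 1)
      )

      assert e_direct == e_indicator
      print(e_direct)  # prints 7/3
      ```

      Using `Fraction` keeps both sides exact, so the equality check is not subject to floating-point error.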

»
4 years ago, # |
  +10

At first, I was solving a harder version of C where we can't use values other than $$$A_1, \ldots, A_N$$$ and $$$10^{100}$$$. The ideas are the same, but it took me over an hour to figure out how exactly it's different. Derp.