maroonrk's blog

By maroonrk, 4 years ago, in English

We will hold AtCoder Grand Contest 049. This contest counts for GP30 scores.

The point values will be 400 — 600 — 800 — 1000 — 1600 — 2400.

Please note that the contest duration is unusually long.

We are looking forward to your participation!

  • Vote: I like it
  • +246
  • Vote: I do not like it

»
4 years ago, # |
  Vote: I like it -59 Vote: I do not like it

Is this the "WTF won't happen in the foreseeable future, so we're giving you a brutal contest to make up for it" contest? Complete with deceptive scores? 3 hours and 20 minutes is a LOT.

»
4 years ago, # |
  Vote: I like it -83 Vote: I do not like it

Just 2020 things

»
4 years ago, # |
  Vote: I like it +18 Vote: I do not like it

Up. Contest starts in 10 minutes.

»
4 years ago, # |
  Vote: I like it +14 Vote: I do not like it

Does the solution to D require advanced techniques like FFT or some convolutions, or is it just DP / observations / combinatorics / inclusion-exclusion, etc.?

I ask this question because I want to be sure I have enough knowledge to solve it on my own.

  • »
    »
    4 years ago, # ^ |
    Rev. 2   Vote: I like it +12 Vote: I do not like it

    I solved it with just a knapsack DP.

  • »
    »
    4 years ago, # ^ |
    Rev. 2   Vote: I like it -27 Vote: I do not like it

    The code is a short and simple knapsack-like DP, but it is quite challenging to come up with.

    • »
      »
      »
      4 years ago, # ^ |
        Vote: I like it -23 Vote: I do not like it

      It didn't seem that hard to me. When you pick the range of minima $$$A_{l,r}$$$ (which seems like a good choice, since we could go below zero otherwise), you get independent subproblems counting $$$m = \sum_{i=1}^n \frac{i(i+1)}{2} C_i$$$ over independent choices of $$$C_i \ge 0$$$, $$$C_n \gt 0$$$. We then sum our DP over all choices of $$$l$$$, $$$r$$$, $$$m_{left}$$$, $$$m_{right}$$$ such that $$$N$$$ divides $$$M-m_{left}-m_{right}$$$ (since that determines the minimum value). There are a few more observations, e.g. that $$$l$$$ and $$$r$$$ are close to the border, and that the "$$$N$$$ divides" part can be incorporated into the DP in the same way as we computed it.

      Quite straightforward compared to A.
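The counting subproblem mentioned in the comment above — the number of ways to pick $$$C_i \ge 0$$$ with $$$m = \sum_{i=1}^n \frac{i(i+1)}{2} C_i$$$ — is a standard unbounded-knapsack (coin-change) count whose "coins" are the triangular numbers. A minimal sketch of just that generic DP (not the full AGC 049 D solution: the $$$C_n \gt 0$$$ and the "$$$N$$$ divides" conditions are omitted; the function name and bounds are arbitrary):

```python
# ways[m] = number of tuples (C_1, ..., C_n) with C_i >= 0 and
#           sum_i T_i * C_i = m, where T_i = i*(i+1)/2 is triangular.
# Standard coin-change counting DP; NOT the actual contest solution,
# just an illustration of the knapsack step the comment describes.

def count_ways(max_m: int, n: int) -> list:
    coins = [i * (i + 1) // 2 for i in range(1, n + 1)]  # T_1..T_n
    ways = [0] * (max_m + 1)
    ways[0] = 1  # one way to make 0: all C_i = 0
    for c in coins:  # processing coins one at a time counts each
        for m in range(c, max_m + 1):  # multiset of coins exactly once
            ways[m] += ways[m - c]
    return ways

ways = count_ways(10, 3)  # coins are T = [1, 3, 6]
print(ways[4])  # 4 = 1+1+1+1 or 1+3 -> prints 2
```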

»
4 years ago, # |
Rev. 2   Vote: I like it +58 Vote: I do not like it

UDGIODFGYOGBFYOs[ ntu450(*%$^57430

I submitted an almost-correct solution to E when there was still an hour to go. I found the bug in the very last minute and was literally a few seconds short. It has AC already ;__;. To be more precise, I did submit a bug-free version in time, but it was a version with a few reverted optimizations that still got TLE, and re-applying them took me those few extra seconds.

»
4 years ago, # |
  Vote: I like it +5 Vote: I do not like it

For me A was more difficult than B.

  • »
    »
    4 years ago, # ^ |
      Vote: I like it +14 Vote: I do not like it

    For me A was more difficult than BCD.

»
4 years ago, # |
  Vote: I like it +43 Vote: I do not like it

This A, lol. I have already seen a few contests where the key idea from here was the key idea behind one of the harder problems in the set. Just AtCoder things.

  • »
    »
    4 years ago, # ^ |
      Vote: I like it 0 Vote: I do not like it

    Sorry, may I ask for a more detailed proof of the technique used in the editorial of A (i.e., how to transform the expected value into that sum)? It seems very nontrivial to me, yet it's apparently supposed to be trivial(?)

    • »
      »
      »
      4 years ago, # ^ |
        Vote: I like it 0 Vote: I do not like it

      I just finished this problem; here is how I formally proved everything.

      We want to prove that the expected value of the random variable $$$X$$$, which outputs the number of steps of a given outcome (an occurrence), equals the sum, over each vertex $$$i$$$, of the probability that $$$i$$$ is in the occurrence.

      $$$E[X]=\sum_{w\in S}P(w)X(w)$$$, where $$$w$$$ is an occurrence and $$$S$$$ is the sample space.

      Let's say that $$$w=v_1 v_2 \ldots v_k$$$; then $$$X(w)=k$$$, so $$$P(w)X(w)=kP(w)=P(w)+P(w)+\ldots+P(w)$$$ ($$$k$$$ copies, one per vertex of $$$w$$$).

      Then for each vertex in an occurrence $$$w$$$, there is one $$$P(w)$$$ term in the summation. Rearranging these terms (collecting together all the terms belonging to the same vertex), we get $$$E[X]=\sum_{w\in S}P(w)X(w)=\sum_{w:\,1 \in w} P(w)+\sum_{w:\,2 \in w} P(w)+\ldots+\sum_{w:\,n \in w} P(w)$$$.

      So by the $$$3^{rd}$$$ axiom of probability (since distinct occurrences are disjoint events), $$$\sum_{w:\,j \in w} P(w)$$$ is just the probability that $$$j$$$ is in the occurrence, which is exactly what we set out to prove.
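The rearrangement above is just the indicator decomposition $$$X(w)=\sum_i \mathbf{1}[i \in w]$$$, giving $$$E[X]=\sum_i P(i \in w)$$$ for any distribution over outcomes. A small self-contained check of the identity on a toy distribution — a random subset with hypothetical, independently chosen inclusion probabilities (the identity itself does not need independence; that is just an easy way to build a distribution for the demo):

```python
import itertools

# Toy check of E[|W|] = sum_i P(i in W), where W is a random subset of
# {0,...,n-1} and element i is included independently with probability p[i].
n = 4
p = [0.2, 0.5, 0.7, 0.9]  # hypothetical inclusion probabilities

# Exact enumeration over the whole sample space S = all 2^n subsets.
expected_size = 0.0
per_element = [0.0] * n  # per_element[i] accumulates P(i in W)
for bits in itertools.product([0, 1], repeat=n):
    prob = 1.0
    for i, b in enumerate(bits):
        prob *= p[i] if b else 1 - p[i]
    expected_size += prob * sum(bits)  # contributes P(w) * X(w)
    for i, b in enumerate(bits):
        if b:
            per_element[i] += prob  # one P(w) term per vertex of w

# The two sides of the identity agree (here both equal sum(p) = 2.3).
assert abs(expected_size - sum(per_element)) < 1e-9
print(expected_size)
```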

»
4 years ago, # |
  Vote: I like it +10 Vote: I do not like it

At first, I was solving a harder version of C where we can't use values other than $$$A_1, \ldots, A_N$$$ and $$$10^{100}$$$. The ideas are the same, but it took me over an hour to figure out exactly how it's different. Derp.