Everule's blog

By Everule, history, 22 months ago, In English

A. Monsters (easy version)

Notice that this is equivalent to maximizing the damage done by the operation of type $$$2$$$. Consider the case where there are two monsters with health $$$h$$$ and $$$h+k$$$, and no monster with health in $$$(h, h+k)$$$. Then reducing the monster with health $$$h + k$$$ to $$$h + 1$$$ cannot make the final outcome worse: while this monster's health is above $$$h + 1$$$, the chain of operation $$$2$$$ can run for at most $$$h$$$ steps, so the extra health is wasted. Applying this repeatedly, we reduce the monsters so that there exists an $$$H$$$ such that for every $$$h$$$ from $$$1$$$ to $$$H$$$ there is a monster with health $$$h$$$, and no monster has health above $$$H$$$. Then one operation of type $$$2$$$ kills all the remaining monsters. So you only need to simulate this process and count how many operations of type $$$1$$$ it required.
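For concreteness, here is a minimal C++ sketch of this simulation, assuming a single test case with $$$n$$$ followed by the healths on standard input (the real problem may wrap this in multiple test cases).

```cpp
#include <bits/stdc++.h>
using namespace std;

int main() {
    int n;
    scanf("%d", &n);
    vector<long long> a(n);
    for (auto &x : a) scanf("%lld", &x);
    sort(a.begin(), a.end());
    long long cap = 0, ops = 0;         // cap = largest health kept so far
    for (long long h : a) {
        long long c = min(h, cap + 1);  // no point leaving this monster above cap + 1
        ops += h - c;                   // type-1 operations spent on this monster
        cap = c;
    }
    printf("%lld\n", ops);
    return 0;
}
```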

B. Letter Exchange

Let us consider a graph with $$$3$$$ nodes $$$w, i, n$$$. For a word such as $$$wii$$$ (one extra $$$i$$$, missing an $$$n$$$) add an edge from $$$i$$$ to $$$n$$$. For a word such as $$$www$$$ add an edge from $$$w$$$ to $$$i$$$ and an edge from $$$w$$$ to $$$n$$$. An edge $$$a \to b$$$ means some word has a surplus letter $$$a$$$ that must eventually be exchanged for a letter $$$b$$$. Now instead of swapping letters between words, we swap letters between edges. Edges $$$a \to b$$$ and $$$b \to a$$$ can be swapped to produce $$$a \to a$$$ and $$$b \to b$$$; self-loops are redundant, so we drop them. From edges $$$a \to b$$$ and $$$b \to c$$$ we get $$$a \to c$$$ and $$$b \to b$$$. The remaining possibilities are symmetric versions of these. Therefore, to finish in the minimum number of moves, we should maximize the number of times we use the first operation, which removes two edges at once, or equivalently minimize the number of times we use the operation of type 2.

We claim that the minimum number of times we need to use the operation of type 2 is the absolute value of the cyclic flow of the graph. The cyclic flow is defined as follows. Let the flow from $$$a$$$ to $$$b$$$ be the number of edges $$$a \to b$$$ minus the number of edges $$$b \to a$$$. Since the indegree and outdegree of every node are equal, the absolute value of this flow is the same for every pair $$$a, b$$$. Call this common value the cyclic flow.

The operation of type $$$1$$$ does not change the cyclic flow, and the operation of type $$$2$$$ changes the cyclic flow by $$$1$$$ in either direction (note that the signed cyclic flow is additive). Therefore we must use the operation of type $$$2$$$ at least that many times. This bound is also achievable: performing operations of type $$$1$$$ while possible, and then using operations of type $$$2$$$ to get rid of the remaining cycles, is a valid strategy.

Therefore you just need to simulate this process.
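As a rough illustration of the counting above, a minimal C++ sketch follows. It assumes a single test case and only computes the minimum number of exchanges; the actual problem also asks to output the exchanges themselves, which the same bookkeeping (storing word indices on the edges) would support.

```cpp
#include <bits/stdc++.h>
using namespace std;

int id(char c) { return c == 'w' ? 0 : (c == 'i' ? 1 : 2); }

int main() {
    int n;
    scanf("%d", &n);
    long long cnt[3][3] = {};        // cnt[a][b] = surplus letters a that must become b
    for (int t = 0; t < n; t++) {
        char s[8];
        scanf("%s", s);
        int have[3] = {};
        for (int j = 0; j < 3; j++) have[id(s[j])]++;
        vector<int> extra, missing;  // letters this word has too many / too few of
        for (int c = 0; c < 3; c++) {
            for (int j = 1; j < have[c]; j++) extra.push_back(c);
            if (have[c] == 0) missing.push_back(c);
        }
        for (size_t j = 0; j < extra.size(); j++) cnt[extra[j]][missing[j]]++;
    }
    long long moves = 0;
    // operation of type 1: cancel an edge a->b against an edge b->a in one exchange
    for (int a = 0; a < 3; a++)
        for (int b = a + 1; b < 3; b++) {
            long long m = min(cnt[a][b], cnt[b][a]);
            moves += m;
            cnt[a][b] -= m;
            cnt[b][a] -= m;
        }
    // what remains decomposes into directed 3-cycles, each costing two exchanges
    long long rem = 0;
    for (int a = 0; a < 3; a++)
        for (int b = 0; b < 3; b++) rem += cnt[a][b];
    moves += 2 * (rem / 3);
    printf("%lld\n", moves);
    return 0;
}
```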

C. Monsters (hard version)

Consider doing the process in reverse. From the easy version, we deduce that there exists $$$H > 0$$$ such that there is some monster with health $$$h$$$ for every $$$1 \le h \le H$$$. This effectively forms a matching between $$$[1, H]$$$ and a subset of monsters, such that each matched monster's health is at least its matched value and the sum of differences is minimized; additionally, the final health of every monster is at most $$$H$$$. We only need to keep track of the monster matched to health $$$h$$$ at each point in time.

Let us prove that the assignment with larger $$$H$$$ is strictly better. Consider two solutions, one with $$$H = x$$$ and one with $$$H = x + 1$$$. If it is possible not to use the largest element for $$$H$$$, we will not use it; this follows from a direct exchange argument on the matching. If there exists a solution with $$$H = x + 1$$$, it is possible to remove any element with health at least $$$x$$$ and still have a matching for $$$H = x$$$. But then we could match the largest element to $$$x + 1$$$ instead and produce a strictly better solution. Therefore $$$H$$$ is always the maximum possible. It also follows that if there exists a monster with health $$$h > H$$$, the current solution is not optimal.

Consider computing the solution for the entire array and then removing monsters one by one. If the removed monster was not matched, the solution stays optimal. Now suppose we remove the monster that was matched to $$$h$$$. We can keep the value of $$$H$$$ exactly when there exists a replacement for it: filling the positions $$$h, h+1, \dots, H$$$ requires $$$H - h + 1$$$ monsters with health at least $$$h$$$, but if there is no replacement, the only such monsters are the $$$H - h$$$ ones already matched to $$$h + 1, \dots, H$$$. So checking whether $$$H$$$ stays the same is the same as checking whether a replacement exists.

Then we claim that picking the smallest replacement and not changing the rest of the matching produces an optimal solution.

Lemma: if there exists a solution, there exists a solution in which the monsters matched to $$$1, 2, \dots, H$$$ appear in sorted order of health. This follows from an exchange argument on every inversion. Therefore the following greedy is optimal among sorted matchings: go over $$$h$$$ from $$$1$$$ to $$$H$$$ and, at each step, pick the smallest available monster whose health is at least $$$h$$$.

We will now show that, with the above two algorithms, the sets of chosen monsters in the two matchings are equal, and therefore the matchings have equal cost.

Consider the case where $$$x$$$ is matched at a lower cost than in the sorted solution. Then any replacement for it would also work for the sorted solution, and since a smaller replacement can only be better, it does at least as well as the optimum.

If it is matched at a higher cost than in the optimal solution, let the two matched values be $$$k_1$$$ and $$$k_2$$$. There must be no unmatched elements between $$$k_1$$$ and $$$k_2$$$, otherwise we could replace $$$x$$$ with one of them and the solution would not be optimal.

If there is no replacement, then we can simply replace the monster matched to $$$h$$$ with the monster matched to $$$H$$$, and decrement $$$H$$$.

Therefore we can deduce that the sets of matched monsters are equal, hence the two solutions have equal cost, which solves the problem.
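For reference, here is a deliberately slow, quadratic sketch that simply reruns the easy-version greedy on every prefix. It is far too slow for the hard version's constraints, but it makes the matching described above concrete and is useful for stress-testing a fast implementation (assuming the hard version asks for the answer after each monster is added).

```cpp
#include <bits/stdc++.h>
using namespace std;

int main() {
    int n;
    scanf("%d", &n);
    vector<long long> a(n);
    for (auto &x : a) scanf("%lld", &x);
    vector<long long> pre;                 // current prefix of monsters, kept sorted
    for (int k = 0; k < n; k++) {
        pre.insert(lower_bound(pre.begin(), pre.end(), a[k]), a[k]);
        long long cap = 0, ops = 0;
        for (long long h : pre) {          // the greedy from the easy version
            long long c = min(h, cap + 1);
            ops += h - c;
            cap = c;
        }
        printf("%lld ", ops);
    }
    printf("\n");
    return 0;
}
```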

Solution — Courtesy of jeroenodb

D. Wooden Spoon

We can find the wooden spoon uniquely by the following process. Create a binary tree whose leaves are the players. We consider the maximum winning each game instead of the minimum, for algebraic convenience; the winner of a node is then the maximum of its children. Start at the root and repeatedly move to the loser of the game played at the current node, until you reach a leaf. We need a way to find the probability of each person being the final leaf of this process. Consider the dynamic programming state $$$dp[d][s]$$$: the probability that the person we reach after $$$d$$$ iterations of the above process has skill $$$s$$$.

We also claim the following invariant: conditioned on reaching the person with skill $$$s$$$ after $$$d$$$ steps, the other players in the subtree of $$$s$$$ form a uniformly random subset of the players with skill less than $$$s$$$.

Let us now consider the probability of the transition from $$$dp[d][s]$$$ to $$$dp[d+1][x]$$$. Let $$$sz(d)$$$ be the size of the current subtree after $$$d$$$ steps of the process, or equivalently $$$2^{n - d}$$$. Then the number of ways to split the subtree of $$$s$$$ into two halves is $$$\binom{s - 1}{sz(d+1)}\binom{s - 1 - sz(d+1)}{sz(d+1) - 1}$$$: first choose the half that does not contain $$$s$$$, then choose the remaining companions of $$$s$$$. The number of ways to split it so that the maximum of the half not containing $$$s$$$ (i.e. the loser of this game) is $$$x$$$ equals $$$\binom{x - 1}{sz(d+1) - 1}\binom{s - sz(d+1) - 1}{sz(d+1) - 1}$$$. This transition also preserves our invariant, so the dynamic programming is valid. Notice that each factor of the transition probability depends only on $$$x$$$ or only on $$$s$$$, so we can use prefix sums with a few multiplications to compute all transitions efficiently.
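Below is a compact sketch of this DP (an illustrative, assumption-laden sketch, not the author's code). The transition probability simplifies to $$$\binom{x-1}{m-1} / \binom{s-1}{m}$$$ with $$$m = sz(d+1)$$$, because the factor $$$\binom{s-m-1}{m-1}$$$ appears in both the numerator and the denominator. The sketch assumes the max-wins convention used above, input consisting of a single integer $$$n$$$ with $$$2^n$$$ players, and the modulus $$$998244353$$$; depending on the exact statement, the output indexing may need to be reversed.

```cpp
#include <bits/stdc++.h>
using namespace std;
using ll = long long;
const ll MOD = 998244353;

ll pw(ll b, ll e) {
    ll r = 1;
    for (b %= MOD; e; e >>= 1, b = b * b % MOD)
        if (e & 1) r = r * b % MOD;
    return r;
}

int main() {
    int n;
    scanf("%d", &n);
    int N = 1 << n;                               // number of players
    vector<ll> fact(N + 1, 1), ifact(N + 1, 1);
    for (int i = 1; i <= N; i++) fact[i] = fact[i - 1] * i % MOD;
    ifact[N] = pw(fact[N], MOD - 2);
    for (int i = N; i >= 1; i--) ifact[i - 1] = ifact[i] * i % MOD;
    auto C = [&](int a, int b) -> ll {
        if (b < 0 || b > a) return 0;
        return fact[a] * ifact[b] % MOD * ifact[a - b] % MOD;
    };
    // dp[s] = probability that the person reached after d descents has skill s
    vector<ll> dp(N + 1, 0);
    dp[N] = 1;                                    // before any descent we sit at the champion
    for (int d = 0; d < n; d++) {
        int m = 1 << (n - d - 1);                 // size of each half after this split
        // P(s -> x) = C(x-1, m-1) / C(s-1, m) for x < s; the s-dependent part is a suffix sum
        vector<ll> suf(N + 2, 0);
        for (int s = N; s >= 1; s--) {
            ll term = 0;
            if (s - 1 >= m)                       // otherwise dp[s] is zero anyway
                term = dp[s] * fact[m] % MOD * fact[s - 1 - m] % MOD * ifact[s - 1] % MOD;
            suf[s] = (suf[s + 1] + term) % MOD;
        }
        vector<ll> ndp(N + 1, 0);
        for (int x = 1; x <= N; x++)
            ndp[x] = C(x - 1, m - 1) * suf[x + 1] % MOD;
        dp.swap(ndp);
    }
    for (int x = 1; x <= N; x++) printf("%lld\n", dp[x]);
    return 0;
}
```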

F. Minimums or Medians

We basically need to find the set of constructible strings. For this we should first analyze how the median and minimum operations interact. Imagine two pointers standing at the two medians of the array at all times. If we do a minimum operation, both pointers move forward to the next element. If we do a median operation, each pointer deletes its current element and then moves outwards. This implies that the larger of the two elements deleted by the median operations is monotonically increasing over the course of the process.

With this analysis we can make our first claim: for any constructible string, the smallest remaining number, if it exists, is always odd. This follows from an invariant: at every point the (sorted) array satisfies $$$a_i \equiv i \pmod 2$$$.

Now for a harder claim: any constructible string whose smallest remaining number is $$$2t + 1$$$ can be constructed using exactly $$$t$$$ minimum operations.

If a median operation acts on some pair $$$(x, y)$$$, then there are currently no elements between $$$x$$$ and $$$y$$$. Therefore either $$$\max(x, y) < 2t + 1$$$ or $$$\min(x, y) > 2t + 1$$$. Since the larger deleted element is monotonically increasing, only a prefix of the median operations can satisfy $$$\max(x, y) < 2t + 1$$$. We replace every median operation with $$$\max(x, y) < 2t + 1$$$ by a minimum operation. By the time of the first remaining median operation, the number of deleted elements smaller than $$$2t + 1$$$ is the same as before, so the medians are in the same position and the rest of the operations stay valid. Therefore the claim is true.

Now it is quite natural to fix $$$t$$$, the number of minimum operations we perform. Then all median operations act on numbers larger than $$$2t + 1$$$. Let us look at what the possible median operations can do. Notice that the right median pointer does not merely increase with every move, it increases by exactly one element. Since it starts at $$$n + 1$$$ and there are $$$k$$$ operations in total, the largest deletable element is $$$n + k$$$.

Next claim: let $$$b$$$ be the number of elements of $$$[1, n]$$$ deleted by median operations. Then those deleted elements are exactly $$$[n-b+1, n]$$$, and additionally all of $$$[n+1, n+b]$$$ must be deleted. This uses the same observation that medians cannot "skip" numbers, except now we look at the gap between $$$n$$$ and $$$n+1$$$. The right median pointer is always at least $$$n+1$$$, and when an element of $$$[1, n]$$$ is deleted by a median operation, it is deleted by the left pointer, which is then at most $$$n$$$. Therefore at each point we can only delete the largest remaining element not exceeding $$$n$$$ and the smallest remaining element exceeding $$$n$$$.

Now let's additionally fix the balance $$$b$$$. Now consider the numbers $$$[n+b+1, n+k]$$$.

We claim that any subset of these numbers can be deleted, as long as every maximal deleted segment has even length. First delete $$$[n-b+1, n+b]$$$, which is straightforward. Then perform a minimum operation so that the median pointers stand at $$$(n+b+1, n+b+2)$$$. From here we use a simple algorithm: move to the midpoint of the next even-length segment that must be deleted, and perform operations until that segment is gone. It is easy to see that this deletes exactly the required segments, and it is possible because the right median pointer can reach up to $$$n + k$$$.

The remaining step is mostly equation bashing, but the high-level idea is as follows. If you have $$$x$$$ elements and you want to delete $$$2s$$$ of them such that every maximal deleted segment has even length, the number of ways to do it is $$$\binom{x - s}{s}$$$: view each even segment as a row of dominoes and arrange $$$s$$$ dominoes among $$$x - 2s$$$ single cells. When you sum these binomial terms over all pairs $$$(t, b)$$$, you may notice that for each $$$x$$$ the possible $$$y$$$ in the resulting terms $$$\binom{x}{y}$$$ form a contiguous range, and as $$$x$$$ increases this range changes by at most $$$1$$$ at either end. Since $$$\sum_i \binom{x + i}{y}$$$ is easier to compute than $$$\sum_i \binom{x}{i}$$$, we can, in a sense, transpose the sum.
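Here is a tiny brute force (a hypothetical checker, not part of the intended solution) that verifies the $$$\binom{x-s}{s}$$$ count for small $$$x$$$:

```cpp
#include <bits/stdc++.h>
using namespace std;

// exact small binomial coefficient
long long C(long long a, long long b) {
    if (b < 0 || b > a) return 0;
    long long r = 1;
    for (long long i = 1; i <= b; i++) r = r * (a - b + i) / i;
    return r;
}

int main() {
    for (int x = 1; x <= 14; x++)
        for (int s = 0; 2 * s <= x; s++) {
            long long cnt = 0;
            for (int mask = 0; mask < (1 << x); mask++) {
                if (__builtin_popcount(mask) != 2 * s) continue;
                bool ok = true;
                for (int i = 0; i < x;) {
                    if (!((mask >> i) & 1)) { i++; continue; }
                    int j = i;
                    while (j < x && ((mask >> j) & 1)) j++;  // maximal deleted run [i, j)
                    if ((j - i) % 2 != 0) ok = false;
                    i = j;
                }
                if (ok) cnt++;
            }
            assert(cnt == C(x - s, s));
        }
    puts("claim verified for x <= 14");
    return 0;
}
```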

I think there is an easier interpretation of this if you loop over $$$b$$$ before $$$t$$$. If anyone has such an interpretation, feel free to share it in the comments.

If anyone wants to write down their solutions to $$$E$$$, I would be happy to add it to the blog.


»
22 months ago, # |

Here I wrote brief solutions for Div1 A-D.

TL;DR for 1785C - Monsters (hard version)
The idea is to remove all the elements which are not helpful at all. E.g., before applying operation 2, if there are two elements of the same height, then we can just delete one of them without any effect on the rest of the elements. Such a monster exists only if its height is $$$i$$$ and there are more than $$$i$$$ monsters with height $$$\le i$$$. Now, we do these operations by maintaining a sorted list (let's call this list $$$b$$$) of the heights of the monsters, inserting elements into this list and removing such useless monsters.

Pseudo code

These operations can be done with a better time complexity using the AtCoder Library (lazy) segment tree, which supports binary search on the segment tree. Reference implementation: 192301905

»
22 months ago, # |

In Editorial for C:

If there exists solution with $$$H=x+1$$$, it is possible to remove any element greater than $$$x$$$ and still have matching.

I think there should be $$$H=x$$$.

  • »
    »
    22 months ago, # ^ |

You're right, it's ambiguous. If there exists a solution with $$$H = x + 1$$$, then you can remove any element $$$\ge x$$$ from the set of monsters and still have a matching for $$$H = x$$$.

»
22 months ago, # |

I see some solutions for D use FFT? Can someone explain how they work?

»
22 months ago, # |

Solution for E is as follows.

Consider the following automaton:

Automaton

First iterate over a set $$$S \subseteq \{ 1,2,3,4 \}$$$.

Define the state $$$f_{i,j,s_1,s_2,s_3,s_4}$$$ as the number of strings that:

  • Have length $$$i$$$ and meet the constraints on the first $$$i$$$ characters;
  • When run through the automaton once, go:
    • from $$$1$$$ to $$$s_1$$$;
    • from $$$2$$$ to $$$s_2$$$;
    • from $$$3$$$ to $$$s_3$$$;
    • from $$$4$$$ to $$$s_4$$$.
  • When starting from the vertices in $$$S$$$, the weights of the edges passed add up to $$$j$$$.

The transition just iterates over the next character and follows the four edges.

After we calculate this DP, consider a state $$$f_{n,j,s_1,s_2,s_3,s_4}$$$. Build a graph containing the four edges $$$i\rightarrow s_i$$$. As the problem takes a limit for infinitely large $$$n$$$, only the cycle that vertex $$$1$$$ eventually runs into matters. Let $$$S'$$$ be the set of vertices on that cycle; only states satisfying $$$S'=S$$$ matter. Then states with $$$j=0$$$ are added to ties, $$$j>0$$$ to Alice's wins, and $$$j<0$$$ to Bob's wins.

Total complexity is $$$O(n^2\times 4^4\times 2^4)$$$, and can pass with some implementation tricks + 64 bit compiler (like, precompute $$$(j,s_1,s_2,s_3,s_4)+\texttt{a/b}\rightarrow (j',t_1,t_2,t_3,t_4)$$$ for all $$$S$$$).

»
22 months ago, # |

My thinking behind C.

The way I did it was:

$$$1.$$$ We need to create a sorted sequence where the difference between any two adjacent elements is at most 1.

$$$2.$$$ Removing the elements that are repeats in the desired array, we get an array $$$1,2,3,\dots,x$$$ for some size $$$x$$$.

$$$3.$$$ Assume the current sum is $$$s$$$ and the number of elements is $$$x$$$; then the answer is $$$s - x(x+1)/2$$$.

$$$4.$$$ OK, now the hardest part. There are 2 possible things that can happen when we add a new element:

$$$4.1$$$ We add the element $$$a_i$$$ and the size increases by one.

$$$4.2$$$ We add the element $$$a_i$$$ and remove some redundant element, so the size stays the same.

$$$5.$$$ How do we check which one happens? Every time we add an element $$$x$$$, every element ahead of it gets shifted up by one. How do we track this? With a lazy segment tree. Imagine an array $$$1,2,3,\dots,n$$$. Every time we add an element we subtract 1 from all positions in $$$[x,n]$$$. If any of these numbers becomes negative, there is some redundant element, because it is suboptimal to change $$$x+1$$$ elements to make a sequence $$$1,2,3,\dots,x$$$. So we just find the first occurrence of a negative number and remove that element by range-adding 1 on $$$[x,n]$$$. How can we find it? Use a min segment tree and binary search for the first position where the value is negative. If there is none, we do operation 4.1, otherwise operation 4.2. Then we update the sum with the new values.

»
22 months ago, # |

I don't understand what "from dp[d+1][x] from dp[d][s]" means in D's solution.... Does it mean "from dp[d+1][x] to dp[d][s]"?

  • »
    »
    22 months ago, # ^ |

    Never mind. The whole solution is way above my head. I give up

»
22 months ago, # |

There is a little mistake in the 8th row of the editorial for problem D: the $$$C(s-1,sz(d+1))*C(s-1,sz(d+1)-1)$$$

maybe should be $$$C(s-1,sz(d+1))*C(s-1-sz(d+1),sz(d+1)-1)$$$, to select S itself and $$$sz(d+1)-1$$$ other elements from the rest.

Anyway, thank you for your excellent editorial!

»
22 months ago, # |

E: Let's fix/substitute the first 2 characters and make an assumption: if the first game in some repetition of (already fully substituted) $$$S$$$ starts at position $$$i$$$ ($$$0 \le i \le 2$$$), then in the next repetition of $$$S$$$, this starting position is $$$p_i$$$. There are at most $$$4 \cdot 27$$$ such choices, and we do a DP for each.

The DP states are "current position, current score diff, states $$$s_0, s_1, s_2$$$ of currently open games if we started from positions $$$0, 1, 2$$$". DP transitions are just straightforward simulation. Once we reach position $$$N$$$, we can use the first 2 characters to determine for which states the assumption above holds, and only consider those.

Consider the graph $$$i \rightarrow p_i$$$. We start from $$$i=0$$$ and quickly reach a small cycle where the same sequence of $$$O(N)$$$ games repeats forever. All games won during the DP for starting positions in this cycle count towards the score difference, and the rest don't; we know this cycle a priori, so we can calculate the score difference during the DP-simulation. There are $$$O(N^2)$$$ states and transitions with a large constant factor, but processing each one is cheap, so it runs in time.

D: Hue hue convolutions with inverses of factorials.