WeakestTopology's blog

By WeakestTopology, 5 weeks ago, In English

In this blog, we build on the introduction from my previous one and then look at some tricks I used to solve PIBO2. For this, you only need to read the first section there; the "Clacking Balls" section and the probability material are not required.

Thanks bashkort for the Month of Blog Posts challenge!

The basis of falling factorials

The $$$k$$$-th falling factorial is the polynomial defined as

$$$\displaystyle (x)_k = x(x - 1)(x - 2) \cdots (x - (k - 1)), $$$

with $$$(x)_0 = 1$$$. It has degree $$$k$$$, so the falling factorials $$$1, x, (x)_2, \dots, (x)_k, \dots$$$ form a basis of the vector space of polynomials, just like the monomial basis $$$1, x, x^2, \dots, x^k, \dots$$$ does. We note that

$$$\displaystyle \frac{(n)_k}{k!} = \binom{n}{k}, $$$

for non-negative integers $$$n$$$ and $$$k$$$. Hence the binomial coefficient can also be seen as a polynomial in $$$n$$$, and we can replace $$$n$$$ by an indeterminate $$$x$$$. From the combinatorial identity

$$$\displaystyle \binom{n + 1}{k} = \binom{n}{k} + \binom{n}{k - 1}, $$$

we get

$$$\displaystyle \Delta \binom{x}{k} = \binom{x}{k - 1} $$$

and thus

$$$\displaystyle \Delta (x)_k = k! \Delta \binom{x}{k} = k! \binom{x}{k - 1} = k (x)_{k-1}. $$$

Hence $$$\Delta$$$ acts on $$$(x)_k$$$ like $$$\frac{d}{dx}$$$ acts on $$$x^k$$$.

One of the first things one can do when given a linear operator is to find a suitable basis for it, one that makes it easier to understand. And that's what we just did for $$$\Delta$$$, as a linear operator on the vector space of polynomials. This basis allows us to try to "invert" $$$\Delta$$$. Note that $$$\Delta$$$ is not invertible since $$$\Delta 1 = 0$$$. But if $$$\Delta p(x) = (x)_k$$$, then we know that $$$p(x) = \frac{1}{k + 1} (x)_{k+1} + C$$$, where $$$C$$$ is some constant. This is similar to integration.
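
As a quick sanity check, here is a minimal Python sketch verifying that $$$\Delta (x)_k = k (x)_{k-1}$$$ on a few points (the helpers falling and delta are my own naming, not from any library):

from math import prod

def falling(x, k):
    # (x)_k = x (x - 1) ... (x - (k - 1)), with (x)_0 = 1
    return prod(x - i for i in range(k))

def delta(f):
    # Discrete derivative: (delta f)(x) = f(x + 1) - f(x)
    return lambda x: f(x + 1) - f(x)

for k in range(1, 6):
    for x in range(-3, 4):
        assert delta(lambda t: falling(t, k))(x) == k * falling(x, k - 1)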

A quick application

Consider a random walk over a path with $$$n$$$ vertices. Let's say we start on vertex $$$1$$$ and each time we go to an adjacent vertex with equal probability. That is, if we are on vertex $$$x$$$ with $$$1 < x < n$$$, then we go to either $$$x - 1$$$ or $$$x + 1$$$, each with probability $$$1/2$$$. If we are at vertex $$$1$$$, we automatically go to vertex $$$2$$$. Let's calculate the expected number of steps until we reach vertex $$$n$$$ for the first time.

Let $$$E_k$$$ denote the expected number of steps until we reach vertex $$$n$$$ if we start at vertex $$$k$$$. Then $$$E_1 = 1 + E_2$$$, $$$E_n = 0$$$ and

$$$\displaystyle E_x = 1 + \frac{1}{2} E_{x-1} + \frac{1}{2} E_{x + 1}, $$$

for $$$1 < x < n$$$. Rearranging these, we get a set of equations, like in the previous blog:

$$$\displaystyle \begin{align*} \Delta^2 E_x &= -2,\\ \Delta E_1 &= -1,\\ E_n &= 0. \end{align*} $$$

"Integrating" once, we get $$$\Delta E_x = -2(x)_1 + C = -2x + C$$$, for some constant $$$C$$$. The second equation yields $$$C = 1$$$. "Integrating" again, we get

$$$\displaystyle E_x = -(x)_2 + x + C' = -x^2 +2x + C', $$$

for another constant $$$C'$$$, which equals $$$n^2 - 2n$$$ by the third equation in the set.
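
To double-check the closed form, here is a short verification snippet of mine, confirming that $$$E_x = -x^2 + 2x + n^2 - 2n$$$ satisfies all three equations for small $$$n$$$:

from fractions import Fraction

def check(n):
    E = lambda x: -x * x + 2 * x + n * n - 2 * n
    assert E(n) == 0                 # boundary: E_n = 0
    assert E(1) == 1 + E(2)          # reflecting boundary at vertex 1
    for x in range(2, n):
        assert Fraction(E(x)) == 1 + Fraction(E(x - 1) + E(x + 1), 2)

for n in range(2, 50):
    check(n)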

PIBO2

Now let's solve PIBO2.

Since we're working with derivatives, let's try to compute $$$\Delta Pib(n)$$$ with the recurrence and see what we find:

$$$\displaystyle \begin{align*} \Delta Pib(n) &= Pib(n + 1) - Pib(n)\\ &= \left( Pib(n) + Pib(n - 1) + P(n + 1) \right) - \left( Pib(n - 1) + Pib(n - 2) + P(n) \right)\\ &= \Delta Pib(n - 1) + \Delta Pib(n - 2) + \Delta P(n). \end{align*} $$$

This is the exact same recurrence but with $$$Pib$$$ and $$$P$$$ replaced by $$$\Delta Pib$$$ and $$$\Delta P$$$ respectively. So we can iterate! This looks useful, because we reduced the degree of the polynomial by $$$1$$$. Iterating it $$$d$$$ times more, we get

$$$\displaystyle \begin{align*} \Delta^{d + 1} Pib(n) &= \Delta^{d+1} Pib(n - 1) + \Delta^{d + 1} Pib(n - 2) + \Delta^{d + 1} P(n)\\ &= \Delta^{d+1} Pib(n - 1) + \Delta^{d + 1} Pib(n - 2), \end{align*} $$$

since $$$P$$$ has degree $$$d$$$ and $$$\Delta^{d + 1} P = 0$$$. We got rid of the polynomial, which seems useful. We are left with a Fibonacci-like recurrence, which is simple to solve with matrix exponentiation, given $$$\Delta^{d+1} Pib(0)$$$ and $$$\Delta^{d + 1} Pib(1)$$$.
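
For concreteness, here is a minimal matrix-exponentiation sketch for a recurrence $$$f(n) = f(n - 1) + f(n - 2)$$$ given $$$f(0)$$$ and $$$f(1)$$$ (function names and layout are mine):

def mat_mul(A, B, mod):
    return [[sum(A[i][t] * B[t][j] for t in range(2)) % mod for j in range(2)] for i in range(2)]

def mat_pow(A, e, mod):
    R = [[1, 0], [0, 1]]  # identity
    while e:
        if e & 1:
            R = mat_mul(R, A, mod)
        A = mat_mul(A, A, mod)
        e >>= 1
    return R

def fib_like(f0, f1, n, mod):
    # (f(n), f(n-1))^T = [[1, 1], [1, 0]]^(n-1) (f(1), f(0))^T
    if n == 0:
        return f0 % mod
    M = mat_pow([[1, 1], [1, 0]], n - 1, mod)
    return (M[0][0] * f1 + M[0][1] * f0) % mod

assert fib_like(0, 1, 10, 10**9) == 55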

On the other hand, just rearranging the recurrence equation from the problem statement, we get

$$$\displaystyle Pib(n - 2) = \Delta Pib(n - 1) - P(n), $$$

and, changing variables,

$$$\displaystyle Pib(n) = \Delta Pib(n + 1) - P(n + 2). $$$

We can iterate this, plugging this same formula to compute $$$\Delta Pib(n + 1)$$$ above, until we reach $$$\Delta^{d + 1}Pib$$$:

$$$\displaystyle \begin{align*} Pib(n) &= \Delta \left( \Delta Pib(n + 2) - P(n + 3) \right)- P(n + 2)\\ &= \Delta^2 Pib(n + 2) - \Delta P(n + 3) - P(n + 2)\\ &\ \vdots\\ &= \Delta^{d+1} Pib(n + d + 1) - \sum_{k=0}^{d + 1} \Delta^{k} P(n + 2 + k)\\ &= \Delta^{d+1} Pib(n + d + 1) - \sum_{k=0}^d \Delta^{k} P(n + 2 + k). \end{align*} $$$

Now, as we've seen above, $$$\Delta^{d+1} Pib(n + d + 1)$$$ can be computed with matrix exponentiation, given the values $$$\Delta^{d+1} Pib(0)$$$ and $$$\Delta^{d + 1} Pib(1)$$$. These values, along with the terms in the summation above, can be computed directly from the definitions in $$$O(d^2)$$$. We simply need to compute the required values of $$$Pib$$$ and $$$P$$$ and to successively use a function that takes an array $$$(a_1, \dots, a_m)$$$ and returns the array of differences $$$(a_2 - a_1, \dots, a_m - a_{m - 1})$$$ (one element shorter).
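
That difference-array routine is tiny; a possible sketch (naming mine):

def diff(a):
    # Maps (a_1, ..., a_m) to (a_2 - a_1, ..., a_m - a_{m-1}); one element shorter.
    return [a[i + 1] - a[i] for i in range(len(a) - 1)]

def iterated_diff(values, times):
    # E.g. applying it d + 1 times to [Pib(0), ..., Pib(d + 2)] leaves the
    # two values [Delta^{d+1} Pib(0), Delta^{d+1} Pib(1)].
    for _ in range(times):
        values = diff(values)
    return values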

We remark that there is another way to compute the summation above, also in $$$O(d^2)$$$, which is more thematic and requires less thinking about indices. Write $$$P$$$ in the basis of falling factorials. To do that, first compute $$$(x)_d$$$. Since the falling factorials are monic (also remember we're working over $$$\mathbb{Z}_{1111111111}$$$), the coefficient $$$\alpha_d$$$ of $$$(x)_d$$$ is simply the leading coefficient of $$$P$$$, so subtracting $$$\alpha_d (x)_d$$$ from $$$P$$$ leaves a polynomial of degree at most $$$d - 1$$$. Computing $$$(x)_{d-1}$$$ from $$$(x)_d$$$ can be done in $$$O(d)$$$ with naive division. With $$$P$$$ written in the basis of falling factorials, we can go from $$$\Delta^k P$$$ to $$$\Delta^{k+1}P$$$ in $$$O(d)$$$ (we saw how $$$\Delta$$$ acts on $$$(x)_k$$$). Evaluating a polynomial written in the basis of falling factorials can also be done in $$$O(d)$$$ per point.
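
Here is one possible $$$O(d^2)$$$ implementation of this change of basis, written with plain integers for clarity; in the problem you would reduce modulo each prime instead. The function name and layout are my own:

def monomial_to_falling(c):
    # c[i] = coefficient of x^i in P; returns alpha with P = sum_k alpha[k] (x)_k.
    d = len(c) - 1
    # Build (x)_d = x (x - 1) ... (x - (d - 1)) in the monomial basis.
    ff = [1]
    for j in range(d):
        nxt = [0] * (len(ff) + 1)
        for i, a in enumerate(ff):
            nxt[i + 1] += a   # contribution of multiplying by x
            nxt[i] -= j * a   # contribution of multiplying by -j
        ff = nxt
    rem, alpha = list(c), [0] * (d + 1)
    for k in range(d, -1, -1):
        alpha[k] = rem[k]             # falling factorials are monic
        for i in range(k + 1):
            rem[i] -= alpha[k] * ff[i]
        if k > 0:
            # (x)_{k-1} = (x)_k / (x - (k - 1)): synthetic division in O(k).
            s = [0] * k
            s[k - 1] = ff[k]
            for i in range(k - 1, 0, -1):
                s[i - 1] = ff[i] + (k - 1) * s[i]
            ff = s
    return alpha

assert monomial_to_falling([0, 0, 1]) == [0, 1, 1]   # x^2 = (x)_2 + (x)_1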

In the rest of this blog, we'll see more stuff to solve this problem in $$$O(d \log^2 d)$$$. Some familiarity with generating functions will be useful.

Evaluating, interpolating and change of basis

At this point we'll assume we're working over a field, as opposed to the ring $$$\mathbb{Z}_{1111111111}$$$. Given $$$d + 1$$$ values $$$y_0, y_1, \dots, y_d$$$, we know there exists a unique polynomial $$$P$$$ of degree at most $$$d$$$ such that $$$P(i) = y_i$$$ for each $$$i = 0, 1, \dots, d$$$. Evaluating from and interpolating to the monomial basis are well known. In this section, we'll learn how to do that for the basis of falling factorials. As a result, we'll also have a way of converting between these two bases.

Evaluation looks simpler, so let's start with that. Let's suppose we have a polynomial $$$P$$$ written like

$$$\displaystyle P(x) = \sum_{k} \alpha_k (x)_k $$$

and we want to compute $$$P(0), P(1), \dots, P(d)$$$. Well, for a point $$$m$$$,

$$$\displaystyle \begin{align*} P(m) &= \sum_k \alpha_k (m)_k\\ &= \sum_{k=0}^m \alpha_k (m)_k\\ &= \sum_{k=0}^m \alpha_k \frac{m!}{(m - k)!}, \end{align*} $$$

that is,

$$$\displaystyle \frac{P(m)}{m!} = \sum_{k = 0}^m \alpha_k \frac{1}{(m-k)!}. $$$

One can recognize the right hand side as a convolution. What we get is that the exponential generating function of the $$$P(m)$$$ equals a simple product:

$$$\displaystyle \sum_m P(m)\frac{x^m}{m!} = \left( \sum_k \frac{x^k}{k!} \right) \left( \sum_k \alpha_k x^k \right) = \exp(x) \left( \sum_k \alpha_k x^k \right). $$$

Interpolation, a.k.a. inverting this, is just as simple. Just multiply both sides by $$$\exp(-x)$$$ and we can get the coefficients $$$\alpha_k$$$ from the values $$$P(m)$$$. So both operations can be done in $$$O(d \log d)$$$.
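
To make the convolution explicit, here is a naive $$$O(d^2)$$$ sketch of both directions over $$$\mathbb{Q}$$$ (names are mine); replacing the inner sums with a single polynomial multiplication gives the $$$O(d \log d)$$$ version:

from fractions import Fraction
from math import factorial

def eval_falling(alpha):
    # Given P = sum_k alpha[k] (x)_k of degree d, return [P(0), ..., P(d)]
    # via P(m)/m! = sum_{k <= m} alpha[k] / (m - k)!.
    d = len(alpha) - 1
    return [factorial(m) * sum(Fraction(alpha[k], factorial(m - k)) for k in range(m + 1))
            for m in range(d + 1)]

def interp_falling(vals):
    # Inverse direction: multiply the EGF of the values by exp(-x).
    d = len(vals) - 1
    return [sum(Fraction((-1) ** (k - m), factorial(k - m)) * Fraction(vals[m], factorial(m))
                for m in range(k + 1))
            for k in range(d + 1)]

assert interp_falling(eval_falling([3, 1, 4, 1])) == [3, 1, 4, 1]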

Shifting

Prerequisite: OGFs, EGFs, differentiation and Taylor shifts.

Given a polynomial $$$P$$$ written in the basis of falling factorials, how to compute $$$P(x + c)$$$? Note that

$$$\displaystyle P(x + 1) = P(x) + \Delta P(x) = (1 + \Delta)P(x). $$$

Iterating this, for integer $$$c$$$, we get

$$$\displaystyle P(x + c) = (1 + \Delta)^c P(x) = \sum_{k=0}^c \binom{c}{k} \Delta^k P(x). $$$

If $$$P$$$ has degree $$$d$$$, then we only need the first $$$d + 1$$$ terms in the sum above, the others are zero. Note that both sides are polynomials in $$$c$$$, so we can extend this for non-integer values of $$$c$$$ as well.

How to apply a polynomial (or power series) of $$$\Delta$$$ to $$$P$$$? The linked prerequisite teaches how to do that for the standard derivative operator $$$D$$$. We know that $$$\Delta$$$ acts on $$$(x)_k$$$ like $$$D$$$ acts on $$$x^k$$$. So just replace $$$P(x) = \sum_k \alpha_k (x)_k$$$ by $$$\sum_k \alpha_k x^k$$$ and $$$\sum_k \binom{c}{k} \Delta^k$$$ by $$$\sum_k \binom{c}{k} D^k$$$ and apply what we already know to get the coefficients that we want.
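
A direct $$$O(d^2)$$$ sketch of the resulting shift in this basis, which also works for non-integer $$$c$$$ (helper names mine; the convolution form of the same sum is what gives $$$O(d \log d)$$$):

from fractions import Fraction
from math import factorial

def binom(c, k):
    # binomial(c, k) for an arbitrary (possibly non-integer) c
    out = Fraction(1)
    for i in range(k):
        out *= Fraction(c) - i
    return out / factorial(k)

def shift_falling(alpha, c):
    # Coefficients of P(x + c) in the falling-factorial basis, using
    # P(x + c) = sum_k binom(c, k) Delta^k P and Delta^k (x)_j = j!/(j-k)! (x)_{j-k}.
    d = len(alpha) - 1
    return [sum(binom(c, k) * Fraction(factorial(i + k), factorial(i)) * alpha[i + k]
                for k in range(d - i + 1))
            for i in range(d + 1)]

assert shift_falling([0, 1], 1) == [1, 1]   # x shifted by 1 is (x)_1 + 1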

With this section and the previous one, you should be able to solve Shift of Sampling Points of Polynomial from Library Checker. Also check out Pow of Formal Power Series (Sparse), as it can speed up the computation of $$$(1 + \Delta)^c$$$ (I learned how to solve it from this blog).

Explicit formula for $$$\Delta^k f$$$

Let's say we have a function $$$f$$$, not necessarily a polynomial. We want to compute $$$\Delta^k f$$$. To get an intuition, if we try manually computing the first few values,

$$$\displaystyle \begin{align*} \Delta^0 f(x) &= f(x)\\ \Delta^1 f(x) &= f(x + 1) - f(x)\\ \Delta^2 f(x) &= f(x + 2) - 2f(x + 1) + f(x)\\ \Delta^3 f(x) &= f(x + 3) - 3f(x + 2) + 3f(x + 1) - f(x), \end{align*} $$$

we see that the coefficients look like the beginning of Pascal's triangle, with some alternating signs.

We can prove by induction on $$$k$$$ that

$$$\displaystyle \Delta^k f(x) = \sum_{i=0}^k (-1)^{k - i} \binom{k}{i} f(x + i). $$$
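
A quick cross-check of this formula against iterated first differences, on a test function I chose arbitrarily:

from math import comb

def delta_k(f, k, x):
    # Delta^k f(x) = sum_i (-1)^(k-i) * C(k, i) * f(x + i)
    return sum((-1) ** (k - i) * comb(k, i) * f(x + i) for i in range(k + 1))

f = lambda x: x ** 3 + 2 ** x
for k in range(6):
    for x in range(5):
        table = [f(x + i) for i in range(k + 1)]
        while len(table) > 1:
            table = [b - a for a, b in zip(table, table[1:])]
        assert table[0] == delta_k(f, k, x)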

PIBO2, but faster

At several points above, we assumed we were working over a field instead of simply a ring. The prime factorization of $$$1111111111$$$ is

$$$\displaystyle 1111111111 = 11 \cdot 41 \cdot 271 \cdot 9091. $$$

So we should solve the problem for each prime $$$p$$$ independently, in $$$\mathbb{Z}_p$$$ (a field), and join the results using the Chinese remainder theorem. Moreover, when working over $$$\mathbb{Z}_p$$$, we should use Fermat's little theorem ($$$a^p \equiv a \mod p$$$ for all $$$a$$$) to transform $$$P$$$ into a polynomial of degree at most $$$p - 1$$$. This is required because large factorials (more specifically $$$p!$$$ and subsequent ones) do not have inverses modulo $$$p$$$, and they would be needed without this transformation.
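
Joining the per-prime results is a standard incremental CRT; a minimal sketch (function name mine):

def crt(rems, mods):
    # Combine x = rems[i] (mod mods[i]) for pairwise coprime moduli,
    # e.g. crt([a11, a41, a271, a9091], [11, 41, 271, 9091]).
    x, m = 0, 1
    for r, p in zip(rems, mods):
        t = ((r - x) * pow(m, -1, p)) % p   # solve x + m*t = r (mod p)
        x += m * t
        m *= p
    return x % m

assert crt([3, 5], [11, 41]) % 11 == 3 and crt([3, 5], [11, 41]) % 41 == 5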

To use the explicit formula to compute $$$\Delta^{d+1} Pib(0)$$$ and $$$\Delta^{d + 1} Pib(1)$$$, we need the first $$$d + 3$$$ values $$$Pib(0), Pib(1), \dots, Pib(d + 2)$$$. In turn, calculating these requires the $$$d + 1$$$ values $$$P(2), P(3), \dots, P(d + 2)$$$, which can be computed with standard multipoint evaluation in $$$O(d \log^2 d)$$$. This is the slowest part of the algorithm. In the next part, we will need $$$P(0)$$$ and $$$P(1)$$$, so we might as well evaluate these together.

Now we are left with the sum

$$$\displaystyle \sum_{k=0}^d \Delta^k P(n + 2 + k). $$$

This looks hard, because not only is the polynomial changing, but so are the points at which we evaluate it, and both transforming and evaluating cost $$$O(d)$$$. But with Taylor shifts, we can transform this into an evaluation at a single point:

$$$\displaystyle \begin{align*} \sum_{k=0}^d \Delta^k P(n + 2 + k) &= \sum_{k=0}^d \Delta^k (1 + \Delta)^k P(n + 2)\\ &= \left( \sum_{k=0}^d (\Delta + \Delta^2)^k \right)P (n + 2)\\ &= \left( \frac{1}{1 - (\Delta + \Delta^2)} + O(\Delta^{d+1}) \right)P (n + 2)\\ &= \left( \frac{1}{1 - (\Delta + \Delta^2)} \right)P (n + 2). \end{align*} $$$

We are now done, because we've seen how to apply a power series of $$$\Delta$$$ to a polynomial written in the basis of falling factorials, we've seen how to interpolate $$$P$$$ from $$$P(0), P(1), \dots, P(d)$$$ and we already have these values.
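
As a closing aside, $$$\frac{1}{1 - (x + x^2)} = \sum_{k \ge 0} F_{k+1} x^k$$$ is exactly the Fibonacci generating function, so the operator above is $$$\sum_{k=0}^{d} F_{k+1} \Delta^k$$$. A naive sketch of the truncated series inversion (swap in an $$$O(d \log d)$$$ power-series inverse in practice; naming mine):

def inv_series(q, n):
    # First n coefficients of 1/q(x), assuming q[0] == 1; naive O(n^2).
    inv = [0] * n
    inv[0] = 1
    for i in range(1, n):
        inv[i] = -sum(q[j] * inv[i - j] for j in range(1, min(i, len(q) - 1) + 1))
    return inv

# For q = 1 - x - x^2 this reproduces the Fibonacci numbers:
assert inv_series([1, -1, -1], 8) == [1, 1, 2, 3, 5, 8, 13, 21]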


By WeakestTopology, 2 months ago, In English

Hey all,

First, I'll give a brief overview of what finite calculus is about, and then I'll show you how I used it to solve 1951G - Clacking Balls. This may not be the most elegant way to solve that problem, but it requires few observations and I find it interesting enough to share. I am also taking part in the Month of Blog Posts challenge.

You can find other introductions by googling the terms "finite calculus", "discrete calculus", etc., although I still have not found an in-depth, thorough introduction to these ideas, only short expositions scattered around the web. But these ideas are pervasive throughout competitive programming (think of adjacent differences, prefix sums, slopes and the like).

Disclaimer: this blog requires some mathematics background to fully understand it.

A short intro to finite calculus

If you know calculus, you have seen derivatives, integrals and the fundamental theorem of calculus. For a function $$$f$$$ defined on the real numbers, we define its derivative (if it exists) to be

$$$\displaystyle \frac{d}{dx}f(x) = \lim_{h \to 0} \frac{f(x + h) - f(x)}{h}. $$$

In the discrete case, with $$$f$$$ defined on the integers instead, we define its (discrete) derivative to be

$$$ \Delta f(x) = f(x + 1) - f(x). $$$

This is exactly the argument in the limit above but with $$$h$$$ set to $$$1$$$. Derivatives are akin to differences and integrals are akin to sums. We have the "fundamental theorem of finite calculus":

$$$ f(b) - f(a) = \sum_{k=a}^{b - 1} \Delta f(k). $$$

It's just a telescoping sum, right? Looks simple and not so useful. We also have a converse statement: if $$$\Delta g = f$$$, then

$$$ g(n) = \sum_{k=1}^{n-1} f(k) + C $$$

for some constant $$$C$$$ (in other words, if $$$\Delta g = \Delta h$$$, then $$$g$$$ and $$$h$$$ differ by a constant). In fact, $$$C = g(1)$$$. Like with standard derivatives, the discrete derivative $$$\Delta$$$ decreases the degree of a polynomial exactly by one: $$$\Delta n^d = (n + 1)^d - n^d$$$, which is a polynomial in $$$n$$$ of degree $$$d - 1$$$.
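
In competitive-programming terms, $$$\Delta$$$ is "adjacent differences" and its inverse is "prefix sums"; a quick Python check of the fundamental theorem on an arbitrary array of mine:

from itertools import accumulate

a = [3, 1, 4, 1, 5, 9, 2, 6]
d = [y - x for x, y in zip(a, a[1:])]          # "derivative": adjacent differences
assert list(accumulate(d, initial=a[0])) == a  # "integral": prefix sums recover a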

I guess the first thing that can give you a thrill about finite calculus is that you can use it to show that

$$$\displaystyle S(n) = \sum_{k=1}^n k^d $$$

is a polynomial of degree $$$d + 1$$$. The proof of this follows from the previous facts and the next theorem:

Let $$$\mathcal{P}_d$$$ denote the space of polynomials of degree at most $$$d$$$. Then $$$\Delta$$$, as a linear transformation $$$\mathcal{P}_{d + 1} \to \mathcal{P}_d$$$, is surjective.

Proof: If $$$\Delta f = 0$$$, then $$$f$$$ is constant, so $$$\operatorname{dim} \ker \Delta = 1$$$ and $$$\Delta$$$ is surjective by the rank-nullity theorem.

Now $$$\Delta S(n) = (n + 1)^d$$$, which is a polynomial. Using the theorem, we know there exists a polynomial $$$p$$$ such that $$$\Delta p(n) = (n + 1)^d = \Delta S(n)$$$, hence $$$S$$$ is also polynomial because it differs from $$$p$$$ by a constant at most.

One way of finding the coefficients of $$$S(n)$$$ is to manually compute $$$S(0), S(1), \dots, S(d + 1)$$$. This yields a linear system with a Vandermonde matrix, which is known to be invertible. There are other ways of computing it, recursive in $$$d$$$, which you can find through a web search. Computer algebra systems are also able to find these formulas, and we're going to need them later on.
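
For example, here is a short exact-arithmetic routine of mine that evaluates $$$S(n)$$$ for any $$$n$$$ by Lagrange interpolation through the $$$d + 2$$$ samples:

from fractions import Fraction

def power_sum(d, n):
    # S(n) = sum_{k=1}^n k^d, evaluated by Lagrange interpolation:
    # S has degree d + 1, so the d + 2 samples S(0), ..., S(d + 1) determine it.
    xs = list(range(d + 2))
    ys = [0]
    for x in range(1, d + 2):
        ys.append(ys[-1] + x ** d)
    res = Fraction(0)
    for i, xi in enumerate(xs):
        term = Fraction(ys[i])
        for j, xj in enumerate(xs):
            if i != j:
                term *= Fraction(n - xj, xi - xj)
        res += term
    return res

assert power_sum(2, 10) == 385   # 1^2 + ... + 10^2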

We can make several analogies with facts that come from standard calculus:

  • If $$$\Delta f = a$$$ is a constant, then $$$f$$$ is linear, i.e. $$$f(n) = an + b$$$,

  • If $$$\Delta f > 0$$$, then $$$f$$$ is strictly increasing,

  • If $$$\Delta^2 f \ge 0$$$, then $$$f$$$ is convex;

and so on.

Clacking balls

Now, please read the problem statement 1951G - Clacking Balls, as it is already abridged enough. Our idea goes as follows:

Considering the gaps between consecutive balls, the process will continue until one of the gaps grows up to size $$$M$$$. If a gap shrinks to size $$$0$$$, it dies and is no longer considered. Let $$$X$$$ denote the random variable representing the number of seconds until the process finishes and $$$A_k$$$ the event that a gap of size $$$k$$$ eventually grows to $$$M$$$. With this in mind, we want to compute:

$$$\displaystyle E(X) = \sum_{k_i} P(A_{k_i})E(X|A_{k_i}), $$$

where the summation is over our set of gap sizes $$$\{k_1, k_2, \dots, k_n\}$$$. As a shortcut, let's write $$$p_k = P(A_k)$$$ and $$$W_k = P(A_k)E(X|A_k)$$$.

The probabilities

This is the easy part. We know that $$$p_m = 1$$$ and $$$p_0 = 0$$$. Intuitively, we also know that $$$p_k$$$ must increase with $$$k$$$.

For a given gap, each second one of three things happens:

  • Alice chooses the ball on its right end and the gap grows by $$$1$$$,

  • Alice chooses the ball on its left end and the gap shrinks by $$$1$$$,

  • Alice chooses any other ball, and the gap stays the same.

With equations, this means that:

$$$\displaystyle p_k = \frac{1}{n}p_{k-1} + \frac{1}{n}p_{k + 1} + \frac{n-2}{n}p_k, $$$

for $$$0 < k < m$$$. Multiplying by $$$n$$$ and rearranging it, we can rewrite this as $$$\Delta p_{k-1} = \Delta p_k$$$. That is, the derivative is constant and hence $$$p_k$$$ is linear in $$$k$$$:

$$$\displaystyle p_k = \frac{k}{m}. $$$

You can use the fundamental theorem of finite calculus to prove this more rigorously.

The expectations

With the conditional expectations, the reasoning is similar, but we must be more careful. We know that $$$E(X|A_m) = 0$$$ but note that $$$E(X|A_0)$$$ is undefined (once a gap shrinks to $$$0$$$ it will never grow again). Hence, the other boundary point is $$$E(X|A_1)$$$, which we know nothing about at first.

The reason for the definition of $$$W_k$$$ will become clear now. Assume $$$1 < k < m$$$ and let's use the definition of conditional expectation for discrete random variables:

$$$\displaystyle \begin{align*} E(X|A_k) &= \sum_{x \ge 1} \frac{x P(\{X = x\} \cap A_k)}{P(A_k)}\\ &= \sum_{x \ge 1} \frac{P(\{X = x\} \cap A_k)}{P(A_k)} + \sum_{x \ge 1} \frac{(x - 1) P(\{X = x\} \cap A_k)}{P(A_k)}\\ &= 1 + \sum_{x \ge 1} \frac{(x - 1) P(\{X = x\} \cap A_k)}{P(A_k)} \end{align*} $$$

As before, we can write

$$$\displaystyle P(\{X = x\} \cap A_k) = \frac{1}{n} P(\{X = x - 1\} \cap A_{k + 1}) + \frac{1}{n} P(\{X = x - 1\} \cap A_{k - 1}) + \frac{n - 2}{n} P(\{X = x - 1\} \cap A_k). $$$

Substituting this in the previous equation:

$$$ \begin{align*} E(X|A_k) &= 1 + \frac{1}{n}\frac{P(A_{k+1})}{P(A_k)} E(X|A_{k+1}) + \frac{1}{n}\frac{P(A_{k-1})}{P(A_k)} E(X|A_{k-1}) + \frac{n-2}{n} E(X|A_k). \end{align*} $$$

Multiplying by $$$nP(A_k)$$$ and rearranging it, we get:

$$$\displaystyle \Delta^2 W_{k-1} = \Delta W_k - \Delta W_{k - 1} = -nP(A_k) = -\frac{nk}{m}. $$$

For $$$k = 1$$$, we have

$$$\displaystyle P(\{X = x\} \cap A_1) = \frac{1}{n} P(\{X = x - 1\} \cap A_2) + \frac{n - 2}{n} P(\{X = x - 1\} \cap A_1) $$$

and, similarly,

$$$ E(X|A_1) = 1 + \frac{n-2}{n} E(X|A_1) + \frac{1}{n} \frac{P(A_2)}{P(A_1)} E(X|A_2), $$$

yielding

$$$\displaystyle \Delta W_1 = W_1 - \frac{n}{m}. $$$

We got ourselves a set of equations:

$$$ \begin{align*} \Delta^2 W_k &= -\frac{n(k + 1)}{m},\\ \Delta W_1 &= W_1 - \frac{n}{m},\\ W_m &= 0. \end{align*} $$$

Intuitively (and concretely, see this comment), it looks like a second order differential equation with two initial value conditions which should determine its solution uniquely. Indeed, let's see.

With the fundamental theorem:

$$$ \begin{align*} \Delta W_k &= \Delta W_1 + \sum_{\ell = 1}^{k - 1} \Delta^2 W_\ell\\ &= W_1 - \frac{n}{m} - \sum_{\ell = 1}^{k - 1} \frac{n(\ell + 1)}{m}\\ &= W_1 - \sum_{\ell = 1}^{k} \frac{n\ell}{m}\\ &= W_1 - \frac{n}{m}\frac{k(k + 1)}{2}. \end{align*} $$$

We just reduced the order of our differential equation by 1! Now for $$$W_k$$$ it's easier to start from the other side, as we know $$$W_m$$$ but not $$$W_1$$$:

$$$ \begin{align*} W_{m - k} &= W_m - \sum_{\ell = 1}^k \Delta W_{m - \ell}\\ &= -kW_1 + \frac{n}{m} \sum_{\ell = 1}^k \frac{(m - \ell)(m - \ell + 1)}{2}\\ &= -kW_1 + \frac{n}{6m} k(k^2 - 3 k m + 3 m^2 - 1), \end{align*} $$$

where in the last step we used a computer algebra system to evaluate the sum (it involves just a sum of squares and other simpler terms, but no point in writing it all down).

Setting $$$k = m - 1$$$ in the above, we get the value of $$$W_1$$$ and with it we can compute the rest.

Implementation in Python:

from fractions import Fraction

T = int(input())
for _ in range(T):
    N, M = map(int, input().split())
    A = sorted(map(int, input().split()))
    n = Fraction(N)
    m = Fraction(M)

    # R(k) = (n / (6m)) * k * (k^2 - 3km + 3m^2 - 1), so that W_{m-k} = -k * W_1 + R(k).
    def R(K):
        k = Fraction(K)
        return n / m * (k * (k * k - 3 * k * m + 3 * m * m - 1)) / 6

    # Setting k = m - 1 above gives W_1 = -(m - 1) W_1 + R(m - 1), i.e. m W_1 = R(m - 1).
    W1 = R(m - 1) / m

    ans = Fraction(0)
    for i in range(N):
        l = (A[(i + 1) % N] - A[i] + M) % M  # gap size after this ball, on the circle
        k = M - l
        ans += -k * W1 + R(k)                # contribution W_l = W_{m-k}
    print(ans)

Submitted C++ implementation: 281371585.

Concluding remarks

I hope you enjoyed reading it!!

Thanks to tfg for recommending this problem to me while we were at the WF in Egypt, and special thanks to alihew841 for spotting typos in the article.
