Elegia's blog

By Elegia, history, 9 months ago, In English

Hi everyone, today I'd like to elaborate a bit more on problem I from the recently ended Div. 1 contest.

As described in the official solution, a bottleneck of the problem is the following: Given a sequence $$$\{c_j\}_{-n \leq j \leq n}$$$, compute

$$$ g_k = \sum_{-k\leq j\leq k} ([x^j] (x + 1/x)^k) c_j $$$

for all $$$0 \leq k \leq n$$$.

In the official solution, an $$$O(n\log^2 n)$$$ time algorithm is presented, and we also claimed that it is possible to compute $$$g_k$$$ in $$$O(n\log n)$$$ time. In this post, I'd like to elaborate on how to do that.

By the way, another bottleneck of the problem is the relaxed convolution problem, where the usual implementation also takes $$$O(n\log^2 n)$$$ time. I mentioned in a comment that relaxed convolution has a somewhat practical solution with time complexity

$$$ O\left(\frac{n\log^2 n}{\log \log n}\right), $$$

and there are more intricate optimizations with theoretical complexity $$$O(n(\log n)^{1+o(1)})$$$, so we have theoretically faster algorithms for problem I.

Well, let's now focus on the previous problem. If you know the methodology called the transposition principle (a.k.a. Tellegen's principle), you will immediately see that the problem is equivalent to the following: Given a polynomial $$$A(x)$$$ of degree $$$n$$$, compute the coefficients of $$$A(x + 1/x)$$$.

Well, if you don't know it, I hope someone will write a blog post about it someday. For now, just forget the previous problem and focus on computing the coefficients of $$$A(x + 1/x)$$$. :)

So here comes our protagonist of this post: the composition problem.

Composition problem

The problem is described as follows. Given a polynomial $$$A(x)$$$ of degree $$$n$$$ and a rational function $$$P(x)/Q(x)$$$, where $$$P, Q$$$ have constant degree, compute the coefficients of the polynomial $$$A(P(x)/Q(x))$$$. More precisely, since $$$Q(x)^n A(P(x)/Q(x))$$$ must be a polynomial, the goal is to compute its coefficients.

Another intuitive interpretation is that $$$A(P(x)/Q(x))$$$ is a rational function, and we want to compute the coefficients of its numerator. This naturally leads to the following general divide-and-conquer algorithm: let $$$A(x) = A_0(x) + x^{n/2} A_1(x)$$$, compute $$$A_0(P/Q)$$$ and $$$A_1(P/Q)$$$ separately, and merge them via

$$$ A(P/Q) = A_0(P/Q) + (P/Q)^{n/2}A_1(P/Q). $$$

The time complexity of this algorithm is given by the recurrence relation $$$T(n) = 2T(n/2) + O(n\log n)$$$, which has solution $$$O(n\log^2 n)$$$.
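To make the recursion concrete, here is a minimal Python sketch (my own illustration, not taken from the official solution). It uses naive polynomial multiplication, so it does not achieve the stated bound; an actual implementation would use an FFT-based product and precompute the powers of $$$P$$$ and $$$Q$$$.

```python
def poly_mul(a, b):
    # naive O(len(a) * len(b)) product; replace with FFT/NTT for the real bound
    c = [0] * (len(a) + len(b) - 1)
    for i, x in enumerate(a):
        for j, y in enumerate(b):
            c[i + j] += x * y
    return c

def poly_add(a, b):
    if len(a) < len(b):
        a, b = b, a
    return [x + (b[i] if i < len(b) else 0) for i, x in enumerate(a)]

def poly_pow(p, k):
    r = [1]
    for _ in range(k):
        r = poly_mul(r, p)
    return r

def compose(A, P, Q):
    """Coefficients of Q(x)^deg(A) * A(P(x)/Q(x)), by divide and conquer."""
    d = len(A) - 1
    if d == 0:
        return [A[0]]
    h = (d + 1) // 2                    # split A = A0 + x^h * A1
    N0 = compose(A[:h], P, Q)           # Q^(h-1) * A0(P/Q)
    N1 = compose(A[h:], P, Q)           # Q^(d-h) * A1(P/Q)
    # Q^d * A(P/Q) = Q^(d-h+1) * N0 + P^h * N1
    return poly_add(poly_mul(N0, poly_pow(Q, d - h + 1)),
                    poly_mul(N1, poly_pow(P, h)))
```

For instance, `compose([1, 1], [1, 0, 1], [0, 1])` returns `[1, 1, 1]`: with $$$A = 1 + x$$$ and $$$P/Q = (1+x^2)/x$$$, we get $$$x\cdot A(x + 1/x) = 1 + x + x^2$$$.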

Several years ago I started to think about this problem. Unfortunately, I don't know how to do better than this in general. However, I did discover some special cases that admit $$$O(n\log n)$$$ algorithms, and they indeed handle the case $$$x+1/x = (1 + x^2)/x$$$.

You might suspect that there is some magic theory behind this, but actually I don't have one. The only thing I did was find some algebraic tricks, and applying them just works.

Building Blocks

We first start with several building blocks that are useful for the composition problem.

The first case is $$$A(x+c)$$$. Actually, this is almost the only nontrivial tool we can use to speed up the composition problem. I guess most of the audience already knows how to do this in $$$O(n\log n)$$$, but for completeness I will still present the idea here. Just look at the expansion of $$$b_k = [x^k] A(x+c)$$$:

$$$ b_k = \sum_j a_j \binom j k c^{j-k} = \frac 1{k!}\sum_j a_j j! \cdot \frac{c^{j-k}}{(j-k)!}, $$$

which is a convolution.
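Here is a direct Python transcription as a sanity check, using the binomial form with exact integer arithmetic. The double loop below is $$$O(n^2)$$$; the whole point of the factored form above is that the inner sum is a convolution, computable with a single FFT in $$$O(n\log n)$$$.

```python
from math import comb

def taylor_shift(a, c):
    """Coefficients of A(x + c): b_k = sum_j a_j * C(j, k) * c^(j - k)."""
    n = len(a)
    return [sum(a[j] * comb(j, k) * c ** (j - k) for j in range(k, n))
            for k in range(n)]
```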

The second case is $$$A(x^k)$$$. This only requires $$$O(n)$$$ time, since it essentially needs no arithmetic operations.

The third case is $$$A(\alpha x)$$$, where $$$\alpha$$$ is a constant. This can be done by scaling the coefficients of $$$A(x)$$$, which also takes $$$O(n)$$$ time.
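Both of the last two cases are one-liners; a quick sketch, mostly to fix names for the later snippets:

```python
def substitute_power(a, k):
    """A(x^k): just spread the coefficients out, no arithmetic needed."""
    out = [0] * ((len(a) - 1) * k + 1)
    for j, x in enumerate(a):
        out[j * k] = x
    return out

def scale(a, alpha):
    """A(alpha * x): multiply the j-th coefficient by alpha^j."""
    return [x * alpha ** j for j, x in enumerate(a)]
```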

Basic Tricks

With the three building blocks above, we can already handle things that look much more difficult.

The first problem is $$$A(x^2 + ax + b)$$$, i.e., the composition of $$$A(x)$$$ with a quadratic polynomial. This can be done by an algebraic trick:

$$$ x^2 + ax + b = \left(x + \frac a 2\right)^2 + \left(b - \frac{a^2}{4}\right). $$$

What does this mean? This means that we can compute $$$A(x^2 + ax + b)$$$ by computing the compositions

$$$ A(x^2 + ax + b) = A(x) \circ \left(x +\left( b - \frac{a^2}{4}\right)\right) \circ x^2 \circ \left(x + \frac a 2\right) $$$

in the order from left to right.
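In code, reusing the helpers above (a sketch over exact rationals, since we divide by $$$2$$$ and $$$4$$$):

```python
from fractions import Fraction

def compose_quadratic(A, a, b):
    """A(x^2 + a*x + b) via shift, power substitution, shift."""
    a, b = Fraction(a), Fraction(b)
    B = taylor_shift(A, b - a * a / 4)   # A(x + (b - a^2/4))
    C = substitute_power(B, 2)           # ... then substitute x^2
    return taylor_shift(C, a / 2)        # ... then shift by a/2
```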

The second problem is $$$A((ax+b)/(cx+d))$$$, i.e., composition with a Möbius transformation. Just look at

$$$ \frac{ax+b}{cx+d} = \frac ac + \frac{bc-ad}{c(cx+d)}, $$$

and we can compute $$$A((ax+b)/(cx+d))$$$ by computing the compositions

$$$ A\left(\frac{ax+b}{cx+d}\right) = A(x) \circ \left(x + \frac ac\right) \circ \frac{bc-ad}{c}x^{-1} \circ (cx+d). $$$

Well, there are some technical details about how to carry out the compositions once $$$x^{-1}$$$ is involved, but these are just tedious, and I think you clever readers can figure them out by yourselves. :)
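For what it's worth, here is one way to organize those details (my own working-out, not necessarily the intended one): track the numerator explicitly. If $$$B$$$ has degree $$$n$$$, then $$$x^n B(k/x)$$$ is just $$$B$$$ with reversed, scaled coefficients, and the final substitution $$$x \mapsto cx + d$$$ is a Taylor shift followed by a scaling.

```python
from fractions import Fraction

def compose_mobius(A, a, b, c, d):
    """(cx+d)^n * A((ax+b)/(cx+d)), assuming c != 0 and exact arithmetic."""
    n = len(A) - 1
    k = Fraction(b * c - a * d, c)
    B = taylor_shift(A, Fraction(a, c))                  # A(x + a/c)
    C = [B[n - m] * k ** (n - m) for m in range(n + 1)]  # x^n * B(k/x)
    return scale(taylor_shift(C, d), c)                  # ... then C(cx + d)
```

As a quick check, `compose_mobius([0, 1], 1, 1, 1, -1)` returns `[1, 1]`, i.e. $$$(x-1)\cdot\frac{x+1}{x-1} = x + 1$$$.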

The Trick for $$$A(x + 1/x)$$$

Now we are going to solve $$$A(x + 1/x)$$$, and here is the trick:

$$$ x + 1/x = 2\frac{x+1}{x-1} \circ x^2 \circ \frac{x+1}{x-1}. $$$
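One can check this identity directly: writing $$$m(x) = \frac{x+1}{x-1}$$$, we have

$$$ 2\,\frac{m(x)^2+1}{m(x)^2-1} = 2\cdot\frac{(x+1)^2+(x-1)^2}{(x+1)^2-(x-1)^2} = \frac{2(2x^2+2)}{4x} = x + \frac 1x. $$$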

Using this special case, we can actually handle general $$$A(P(x)/Q(x))$$$ in $$$O(n\log n)$$$ time, where $$$P, Q$$$ are quadratic polynomials.

For $$$(ax^2 + bx + c) / (dx^2 + ex + f)$$$, we can first split off a constant term, reducing to the case $$$(b' x + c') / (dx^2 + ex + f)$$$.

Then, substituting $$$x \mapsto x - c'/b'$$$, we reduce to the case $$$b'x / (d'x^2 + e'x + f')$$$.

Then, taking the reciprocal, we reduce to the case $$$(d'x^2 + e'x + f') / (b'x) = (d'x^2 + f') / (b'x) + e'/b'$$$.

It may seem that $$$\frac{d'}{b'}x + \frac{f'}{b'}x^{-1}$$$ is different from $$$x+x^{-1}$$$, but after scaling we can again reduce to the case $$$x+x^{-1}$$$.
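Explicitly, substituting $$$x \mapsto sx$$$ with $$$s = \sqrt{f'/d'}$$$ gives

$$$ \frac{d's}{b'}x + \frac{f'}{b's}x^{-1} = \frac{\sqrt{d'f'}}{b'}\left(x + x^{-1}\right), $$$

and the remaining constant factor is absorbed by one more scaling on the left.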

Well, in some special cases we cannot perform a division (some coefficient vanishes), but those cases reduce to others, such as composing a quadratic polynomial, which is also already settled.

(Acknowledgement: this blog (Chinese) reminded me that it's possible to settle all of $$$(ax^2 + bx + c) / (dx^2 + ex + f)$$$.)

The Trick for $$$A(x^3 + ax^2 + bx + c)$$$

Actually, we can also handle $$$A(x^3 + ax^2 + bx + c)$$$ in $$$O(n\log n)$$$ time.

The first two tricks are just applications of $$$x+c$$$: an inner shift to remove the quadratic term, and an outer shift to remove the constant term. So the only thing we need to handle is computing $$$A(x^3 + cx)$$$, and since we can also do scaling, we only need to solve it for some specific $$$c$$$.

The game ends with the following trick:

$$$ (x^3 - 3x) \circ \left(x+x^{-1}\right) = x^3+x^{-3}. $$$
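Indeed, expanding directly,

$$$ \left(x+x^{-1}\right)^3 - 3\left(x+x^{-1}\right) = \left(x^3 + 3x + 3x^{-1} + x^{-3}\right) - 3x - 3x^{-1} = x^3 + x^{-3}. $$$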

This means that

$$$ x^3-3x = \left(x+x^{-1}\right) \circ x^3 \circ \left(x+x^{-1}\right)^{\langle -1\rangle}. $$$

You may ask: how can one compose with $$$\left(x+x^{-1}\right)^{\langle -1\rangle}$$$? Well, the issue is that the composition $$$x^{1/2}$$$ appears when you expand the composition chain of $$$x+x^{-1}$$$, but at that point it can be proved that all terms of the polynomial have the form $$$x^{2k}$$$, so the composition $$$x^{1/2}$$$ has a clear meaning.

Well, there are many technical details that I omitted; for example, in some essential cases we need to scale by a parameter that is a square root of something. This can be handled by introducing a formal element $$$R = \sqrt \alpha$$$, and there might be other technical details that I omitted as well.

In any case, the long composition chains and the technical details make the $$$O(n\log n)$$$ algorithm not so practical according to my experiments, but I think it is still an interesting result.

Can we go further?

Is it possible to solve the composition problem in $$$O(n\log n)$$$ time for general rational functions? At least with the current techniques we have, the answer is no.

Here is an argument. What we have done so far is reduce the composition problem to a composition chain built from Möbius transforms $$$(ax+b)/(cx+d)$$$, powers $$$x^k$$$, and roots $$$x^{1/k}$$$. Now suppose we want to solve the equation $$$P(x) = 0$$$, where $$$P(x)$$$ has such a composition chain. Then we can solve $$$P(x) = 0$$$ step by step along the chain, which yields an expression of the roots of $$$P(x)$$$ in radicals. However, by the celebrated Abel–Ruffini theorem, there are polynomials (in fact, generic polynomials of degree $$$\geq 5$$$) whose roots cannot be expressed in radicals, and such polynomials definitely cannot be represented by the composition chains we want!

So there is an interesting question here: find some characterization of the polynomials that can be represented by such a composition chain. This reminds me of Galois theory, and of the characterization of compositions of polynomials by polynomials known as Ritt's decomposition theorem. However, neither seems to be directly related to the composition chains we are talking about; I wonder whether there is some well-developed mathematical theory behind this problem.

Another question is: can we find other building blocks, thus extending our ability to handle more cases of the composition problem? Or, more ambitiously, can we find an entirely different algorithm that solves the composition problem in $$$O(n\log n)$$$ time for general rational functions?

I'd be happy to see any of these questions answered, whether positively or negatively.

Anyway, have a nice day!


By Elegia, history, 3 years ago, In English

This is a part of my report on China Team Selection 2021. I might translate some other parts later if you are interested.

I'm going to show that we can calculate $$$f(G)$$$ for any polynomial $$$f\in R[x]$$$ and set power series $$$G: 2^{n} \rightarrow R$$$ using $$$\Theta(n^2 2^n)$$$ operations on $$$R$$$.

Let's first consider a special case: calculating $$$F = \exp G$$$. We view set power series as truncated multivariate polynomials in $$$R[x_1,\dots,x_n]/(x_1^2,\dots,x_n^2)$$$. We can see that $$$F' = FG'$$$ for whichever partial derivative $$$\frac{\partial}{\partial x_k}$$$ we take. Taking $$$k = n$$$ and extracting the $$$x_n$$$-free part, we can rewrite this equation as $$$[x_n^1] F = ([x_n^1]G)\cdot [x_n^0]F$$$. Hence we have reduced the problem to calculating $$$[x_n^0] F$$$, and then one subset convolution gives the rest. The time complexity is $$$T(n)=\Theta(n^22^n) + T(n-1)$$$, which is exactly $$$\Theta(n^22^n)$$$. I call this method "Point-wise Newton Iteration".
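To make this concrete, here is a minimal Python sketch over the integers (names and organization are mine): `subset_convolution` is the standard ranked zeta/Möbius routine, and `sps_exp` runs the point-wise Newton iteration bottom-up, adding one variable at a time. An optimized version would keep the ranked transforms across iterations.

```python
def subset_convolution(a, b, n):
    """c[S] = sum of a[T] * b[S \\ T] over subsets T of S, via ranked transforms."""
    N = 1 << n
    pc = [bin(S).count("1") for S in range(N)]
    fa = [[a[S] if pc[S] == k else 0 for S in range(N)] for k in range(n + 1)]
    fb = [[b[S] if pc[S] == k else 0 for S in range(N)] for k in range(n + 1)]
    for f in fa + fb:                        # zeta transform each rank slice
        for i in range(n):
            for S in range(N):
                if S >> i & 1:
                    f[S] += f[S ^ (1 << i)]
    fc = [[0] * N for _ in range(n + 1)]
    for k in range(n + 1):                   # multiply the rank "polynomials"
        for i in range(k + 1):
            for S in range(N):
                fc[k][S] += fa[i][S] * fb[k - i][S]
    for f in fc:                             # Moebius transform back
        for i in range(n):
            for S in range(N):
                if S >> i & 1:
                    f[S] -= f[S ^ (1 << i)]
    return [fc[pc[S]][S] for S in range(N)]

def sps_exp(g, n):
    """exp of a set power series g (g[0] must be 0), bottom-up Newton."""
    assert g[0] == 0
    f = [1]                                  # exp over zero variables
    for k in range(n):                       # add the variable x_{k+1}
        low, g_high = f, g[1 << k: 2 << k]   # [x^0] F and [x^1] G
        f = low + subset_convolution(low, g_high, k)  # [x^1]F = ([x^1]G) * [x^0]F
    return f
```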

This method seems more reasonable than calculating $$$\exp$$$ on each rank vector, because $$$\frac{G^k}{k!}$$$ counts all "partitions" with $$$k$$$ nonempty sets in $$$G$$$, so it exists whenever $$$1, \dots, n$$$ are invertible in $$$R$$$.

Now we try to apply this kind of iteration to an arbitrary polynomial $$$f$$$. Let $$$F=f(G)$$$; then $$$F'=f'(G)G'$$$. We have thus reduced the problem to calculating $$$[x_n^0]f(G)$$$ and $$$[x_n^0]f'(G)$$$. This becomes two problems! But don't worry: in the second round of recursion, it becomes $$$[x_{n-1}^0 x_n^0]$$$ of $$$f(G), f'(G), f''(G)$$$. The number of problems grows only linearly. You will find that the complexity is $$$\sum_{0\le k\le n} (n-k) \cdot \Theta(k^2 2^k)$$$.

It's not hard to see that the complexity is still $$$\Theta(n^2 2^n)$$$, because $$$\sum_{k\ge 0} k\cdot 2^{-k}$$$ converges.
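The same bottom-up view handles a general polynomial $$$f$$$. Here is a sketch (helper name mine, reusing `subset_convolution` from above): maintain the truncations of all needed $$$f^{(j)}(G)$$$, and note that each added variable consumes one derivative, which is exactly where the sum above comes from.

```python
from math import perm  # perm(i, j) = i! / (i - j)!

def sps_poly_composition(f, g, n):
    """f(G) for a polynomial f (coefficient list) and a set power series g."""
    # h[j] = f^{(j)}(G) truncated to the first k variables; at k = 0 this is
    # the scalar f^{(j)}(g[0]) = sum_i f[i] * perm(i, j) * g[0]^(i-j) (Horner).
    h = []
    for j in range(n + 1):
        v = 0
        for i in reversed(range(j, len(f))):
            v = v * g[0] + f[i] * perm(i, j)
        h.append([v])
    for k in range(n):                       # add the variable x_{k+1}
        g_high = g[1 << k: 2 << k]           # [x_{k+1}^1] G
        # [x^1] f^{(j)}(G) = ([x^1] G) * [x^0] f^{(j+1)}(G): each track needs
        # the next derivative, so one fewer track survives each round.
        h = [h[j] + subset_convolution(h[j + 1], g_high, k)
             for j in range(n - k)]
    return h[0]
```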

Noting that $$$\frac{(A+B)^2}2-\frac{A^2}2-\frac{B^2}2=AB$$$, we have also proved the equivalence between composition and subset convolution.

Here are some remarks:

  • If $$$G$$$ has no constant term, $$$f$$$ is better understood as an EGF.
  • This algorithm has the same complexity for calculating $$$f(A, B)$$$ and so on.
  • We can optimize the constant factor of this algorithm by always working with the rank vectors during the iteration. It can then beat many methods that only work for some specialized $$$f$$$.
  • If you have some knowledge of the "Transposition Principle", you will find that for any fixed $$$F, G: 2^n \rightarrow R$$$, the transposed algorithm gives $$$\sum_S F_S \cdot (G^k)_S$$$ for all $$$k$$$. This immediately gives a $$$\Theta(n^22^n)$$$ algorithm to calculate the whole chromatic polynomial of a graph with $$$n$$$ vertices.


By Elegia, 4 years ago, In English

The China Team Selection ended today; the top 4 are:

Yu Haoxiang (yhx-12243),
Deng Mingyang (Rewinding),
Qian Yi (skip2004) and
Dai Chenxin (gtrhetr).

They will participate in IOI2021. Good luck!


By Elegia, history, 4 years ago, In English

We will compute the bivariate generating function $$$\widehat F(z, t) = \sum_{n\ge 1}\sum_{k=1}^n A_{n,k}\frac{z^nt^k}{n!}$$$, where $$$A_{n,k}$$$ is the total number of occurrences of $$$k$$$ over all good sequences of length $$$n$$$. Consider the inclusion-exclusion principle: for the contributions with $$$\max k = m$$$, we have

$$$ \sum_{j=1}^m \sum_{S \subseteq \{1, 2, \cdots m-1\}} (-1)^{|S|} Q(S, z, t, j) $$$

This means that for each $$$i \in S$$$, we force all occurrences of $$$i$$$ to appear before $$$i + 1$$$, and we count the occurrences of $$$j$$$.

After some tough calculation, we find that this equals $$$\frac{t(\mathrm e^{z(1-t)}-1)}{(1-z) (1-t \mathrm e^{z(1-t)})}$$$.

Now our goal is to calculate

$$$ [z^n]\frac{t(\mathrm e^{z(1-t)}-1)}{(1-z) (1-t \mathrm e^{z(1-t)})} $$$

We consider $$$\left([z^n]\frac{t(\mathrm e^{z(1-t)}-1)}{(1-z) (1-t \mathrm e^{z(1-t)})}\right) + 1 = (1-t)[z^n] \frac 1{(1-z)(1-t\mathrm e^{z(1-t)})}$$$, which looks simpler. Letting $$$z = \frac u{1-t}$$$, we get

$$$ [z^n]\frac 1{(1-z)(1-t\mathrm e^{z(1-t)})} = (1-t)^n[u^n] \frac1{(1-\frac u{1-t})(1-t\mathrm{e}^u)} $$$

Hence we have

$$$ \begin{aligned} (1-t)[z^n] \frac 1{(1-z)(1-t\mathrm e^{z(1-t)})} &= [u^n]\frac{(1-t)^{n+2}}{(1-\frac{t}{1-u})(1-t\mathrm e^u)(1-u)}\\&= (1-t)^{n+2} [u^n] \left(\frac{-\mathrm e^u}{\left(\mathrm e^u u-\mathrm e^u+1\right) \left(1-t \mathrm e^u\right)}+\frac{\frac{1}{1-u}}{\left(\mathrm e^u u-\mathrm e^u+1\right) (1-\frac{t}{1-u})}\right)\end{aligned} $$$

Noticing that $$$[u^m]\mathrm{e}^{ku} = \frac{k^m}{m!}$$$, the first term can be calculated directly through multipoint evaluation, with time complexity $$$\Theta(n\log^2 n)$$$. And since $$$[u^m] \frac1{(1-u)^k} = \binom{m+k-1}{m} = \frac{(m+k-1)!}{m!\,(k-1)!}$$$, the second term can be calculated through a convolution. It will pass if your implementation doesn't have a big constant factor.
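To spell out the multipoint-evaluation step: writing $$$g(u)$$$ for the factor multiplying $$$\frac{1}{1-t \mathrm e^u}$$$ in the first term, the coefficient of $$$t^k$$$ is

$$$ [u^n]\, g(u)\,\mathrm e^{ku} = \sum_{m} g_{n-m}\,\frac{k^m}{m!}, $$$

which is the value of the single polynomial $$$\sum_m \frac{g_{n-m}}{m!}\,x^m$$$ at $$$x = k$$$, so all the needed coefficients come from one multipoint evaluation at $$$k = 0, 1, \dots, n$$$.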

It can also be improved to $$$\Theta(n\log n)$$$ through the Lagrange inversion formula, similarly to the original solution; I leave this as an exercise.

UPD1: simplified some of the derivation.
