On today's POI training camp round I learned a nice technique that could be useful in some number theory problems. I couldn't find any CF article on it, so I think it's fair to share it myself.
Remark on notation
In some sums I will use Iverson bracket notation: $$$[P]$$$ equals $$$1$$$ if the statement $$$P$$$ is true and $$$0$$$ otherwise.
Problem: Squarefree Function
Let's define the Squarefree Function $$$b(x)$$$ for any positive integer $$$x$$$ as $$$x$$$ divided by the greatest perfect square that divides $$$x$$$.
For example: $$$b(1) = 1$$$, $$$b(2) = 2$$$, $$$b(4) = 1$$$, $$$b(6) = 6$$$, $$$b(27) = 3$$$, $$$b(54) = 6$$$, $$$b(800) = 2$$$
Given an array $$$a$$$ of $$$n \leq 10^5$$$ positive integers, where each $$$a_i \leq 10^5$$$, compute the sum
\begin{gather} \sum\limits_{1 \leq i < j \leq n}b(a_i\cdot a_j) \end{gather}
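Before diving into the technique, here is a minimal brute-force sketch I would use as a reference to sanity-check a solution (far too slow for the full constraints; the function name is mine):

```cpp
#include <bits/stdc++.h>
using namespace std;

// b(x): repeatedly divide out squares of primes, leaving the squarefree part of x.
long long squarefree_part(long long x) {
    for (long long p = 2; p * p <= x; ++p)
        while (x % (p * p) == 0) x /= p * p;
    return x;
}

int main() {
    int n;
    cin >> n;
    vector<long long> a(n);
    for (auto &v : a) cin >> v;
    long long ans = 0;
    for (int i = 0; i < n; ++i)          // O(n^2) pairs -- only OK for small n
        for (int j = i + 1; j < n; ++j)
            ans += squarefree_part(a[i] * a[j]);
    cout << ans << '\n';
}
```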
Technique: GCD Convolution
You have probably heard about the Sum Convolution. For two arrays $$$b$$$ and $$$c$$$, it is defined as an array $$$d$$$ such that \begin{gather} d_k = \sum\limits_{i,j}b_i\cdot c_j[i + j = k] \end{gather} If not, it's basically the same thing as polynomial multiplication: if $$$B(x) = b_0 + b_1x + b_2x^2 + ... + b_nx^n$$$ and $$$C(x) = c_0 + c_1x + c_2x^2 + ... + c_nx^n$$$, then $$$(B \cdot C)(x) = d_0 + d_1x + d_2x^2 + ... + d_{2n}x^{2n}$$$
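For completeness, a naive sketch of the sum convolution (0-indexed, matching the polynomial view above; the function name is mine):

```cpp
#include <bits/stdc++.h>
using namespace std;

// d[k] = sum over all i + j = k of b[i] * c[j],
// i.e. the coefficients of the product B(x) * C(x).
vector<long long> sum_convolution(const vector<long long>& b, const vector<long long>& c) {
    vector<long long> d(b.size() + c.size() - 1, 0);
    for (size_t i = 0; i < b.size(); ++i)
        for (size_t j = 0; j < c.size(); ++j)
            d[i + j] += b[i] * c[j];
    return d;
}
```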
Let's define the GCD Convolution by analogy.
Definition
A GCD Convolution of two arrays $$$b$$$ and $$$c$$$, consisting of positive integers, is an array $$$d$$$ such that \begin{gather} d_k = \sum\limits_{i, j}b_i\cdot c_j[gcd(i,j) = k] \end{gather}
Algorithm to find GCD Convolution
Of course, we can compute it naively by iterating over all pairs of indices. If $$$b$$$ and $$$c$$$ consist of $$$n$$$ elements each, then the overall complexity would be $$$O(n^2\log(\max(b) + \max(c)))$$$. But it turns out that we can do better.
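For reference, the naive version might look like this (arrays are 1-indexed, index $$$0$$$ is unused; the function name is mine):

```cpp
#include <bits/stdc++.h>
using namespace std;

// Naive GCD convolution: d[k] = sum over gcd(i, j) = k of b[i] * c[j].
// Arrays b[1..n] and c[1..n] are 1-indexed, so this runs in O(n^2 log).
vector<long long> gcd_convolution_naive(const vector<long long>& b, const vector<long long>& c) {
    int n = (int)b.size() - 1;
    vector<long long> d(n + 1, 0);
    for (int i = 1; i <= n; ++i)
        for (int j = 1; j <= n; ++j)
            d[std::gcd(i, j)] += b[i] * c[j];
    return d;
}
```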
Let's look at the sum of the $$$d_k$$$ values whose indices are divisible by some integer $$$g$$$, that is, $$$k = gm$$$ for some integer $$$m$$$. \begin{gather} \sum\limits_{m=1}^{n/g}d_{gm} = \sum\limits_{m=1}^{n/g}\sum\limits_{i,j}b_i\cdot c_j[gcd(i,j) = gm] = \sum\limits_{i,j}b_i\cdot c_j\left[g \;| \;gcd(i,j)\right] \end{gather}
From the definition of gcd, we know that $$$g | gcd(i,j) \Leftrightarrow g | i \wedge g | j$$$ \begin{gather} \sum\limits_{i,j}b_i\cdot c_j[g \;|\; gcd(i,j)] = \sum\limits_{i,j}b_i\cdot c_j[g \;|\; i][g \;|\; j] = \end{gather} \begin{gather} =\sum\limits_{i,j}\left(b_i[g \;|\; i]\right)\cdot \left(c_j[g \;|\; j]\right) = \left(\sum\limits_{g|i}b_i\right)\left(\sum\limits_{g|j}c_j\right) \end{gather}
We can define $$$B_g = \sum_{m=1}^{n/g}b_{gm}$$$, $$$C_g = \sum_{m=1}^{n/g}c_{gm}$$$, and $$$D_g = \sum_{m=1}^{n/g}d_{gm}$$$. From the above equation one can easily derive $$$D_g = B_g\cdot C_g$$$. Knowing that $$$n + \frac{n}{2} + \frac{n}{3} + ... = O(n\log n)$$$, the arrays $$$B$$$ and $$$C$$$ can be computed directly from their definitions in $$$O(n\log n)$$$.
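A sketch of computing these sums over multiples (shown only for $$$B$$$; $$$C$$$ is analogous), again with 1-indexed arrays and a function name of my own choosing:

```cpp
#include <bits/stdc++.h>
using namespace std;

// B[g] = b[g] + b[2g] + b[3g] + ...
// The double loop makes n/1 + n/2 + n/3 + ... = O(n log n) steps in total.
vector<long long> multiples_sum(const vector<long long>& b) {
    int n = (int)b.size() - 1;   // b is 1-indexed: b[1..n]
    vector<long long> B(n + 1, 0);
    for (int g = 1; g <= n; ++g)
        for (int m = g; m <= n; m += g)
            B[g] += b[m];
    return B;
}
```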
Recovering the $$$d_k$$$ values from the $$$D$$$ array is simple. All we need to do is subtract from $$$D_k$$$ all of its summands except the smallest-indexed one, which is $$$d_k$$$ itself. So, formally, we have \begin{gather} d_k = D_k - \sum\limits_{m=2}^{n/k}d_{km} \end{gather} which can be computed using dynamic programming, starting from $$$k = n$$$ and going down to $$$k = 1$$$.
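Putting it together, a sketch of the whole procedure, reusing `multiples_sum` from the previous snippet (the names are mine, and for simplicity I ignore possible overflow):

```cpp
// O(n log n) GCD convolution following the derivation above:
// build D[k] = B[k] * C[k], then peel off d[2k], d[3k], ... going from k = n down to 1.
vector<long long> gcd_convolution(const vector<long long>& b, const vector<long long>& c) {
    int n = (int)b.size() - 1;
    vector<long long> B = multiples_sum(b), C = multiples_sum(c);
    vector<long long> d(n + 1, 0);
    for (int k = n; k >= 1; --k) {
        d[k] = B[k] * C[k];                 // this is D_k
        for (int m = 2; m * k <= n; ++m)    // subtract d_{2k}, d_{3k}, ...
            d[k] -= d[m * k];
    }
    return d;
}
```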
So, the overall complexity of computing a GCD Convolution of two arrays of size $$$n$$$ is $$$O(n\log n)$$$.