Thank you for joining the contest 😊.
A. SUNNY
bijections
dynamic-programming
Try to find a bijection, meaning try to find another problem whose solution is equivalent to the original problem's but which is easier to solve.
Try to notice what happens to the difference array of $$$a$$$ when Sunny plays a note.
Try to use dynamic programming to check if an answer exists.
Let $$$da_i = a_{i+1} - a_i, db_i = b_{i+1} - b_i, d_i = db_i - da_i$$$ for all $$$i$$$ $$$(1 \le i < n)$$$
When Sunny plays a note on string $$$k$$$, six things can happen to $$$da$$$ depending on the value of $$$k$$$ (a small worked example follows this list):
$$$k < a_1$$$, in which case all the elements of the array get incremented by $$$1$$$, and $$$da$$$ is not affected.
$$$k > a_n$$$, in which case all the elements of the array get decreased by $$$1$$$, and $$$da$$$ is not affected.
$$$k = a_1$$$, in which case all the elements of the array get increased by $$$1$$$ except $$$a_1$$$, so only $$$da_1$$$ is increased by $$$1$$$.
$$$k = a_n$$$, in which case all the elements of the array get decreased by $$$1$$$ except $$$a_n$$$, so only $$$da_{n-1}$$$ is increased by $$$1$$$.
$$$k = a_i$$$ for some $$$i$$$ $$$(1 < i < n)$$$, in which case both $$$da_{i-1}$$$ and $$$da_i$$$ get increased by $$$1$$$.
$$$a_i < k < a_{i+1}$$$ for some $$$i$$$ $$$(1 \le i < n)$$$, in which case $$$da_i$$$ gets increased by $$$2$$$. This operation can only be applied if there exists an integer strictly between $$$a_i$$$ and $$$a_{i+1}$$$, as $$$k$$$ has to be an integer (i.e., we can increase the value of $$$da_i$$$ by $$$2$$$ only if $$$da_i > 1$$$).
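As a small worked check of the cases above (the values here are made up for illustration; as the cases imply, entries of $$$a$$$ greater than $$$k$$$ increase by $$$1$$$, entries less than $$$k$$$ decrease by $$$1$$$, and an entry equal to $$$k$$$ stays the same): take $$$a = [2, 5, 6]$$$, so $$$da = [3, 1]$$$. Playing $$$k = 4$$$ (case $$$(6)$$$ with $$$i = 1$$$, allowed since $$$da_1 = 3 > 1$$$) turns $$$a$$$ into $$$[1, 6, 7]$$$ and $$$da$$$ into $$$[5, 1]$$$, so $$$da_1$$$ increased by $$$2$$$; playing $$$k = 5 = a_2$$$ instead (case $$$(5)$$$) turns $$$a$$$ into $$$[1, 5, 7]$$$ and $$$da$$$ into $$$[4, 2]$$$, so both $$$da_1$$$ and $$$da_2$$$ increased by $$$1$$$.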
From operations $$$(1), (2)$$$, we can change the value of $$$a_1$$$ as much as we like without affecting the difference array of $$$a$$$, so we just need to check whether we can convert the difference array $$$da$$$ into the difference array $$$db$$$ in order to be able to convert $$$a$$$ into $$$b$$$.
First, notice that if $$$d_i < 0$$$ for some $$$i$$$, the answer is always NO as the value of $$$da_i$$$ can only be increased.
Notice that we never need to apply operation $$$(5)$$$ more than $$$2$$$ times on the same index $$$i$$$. The reason is that we may only need it when one of $$$da_{i-1}, da_{i}$$$ is equal to $$$1$$$ (meaning that we cannot apply operation $$$(6)$$$ on one of them); if we apply it once and are not satisfied with the parity of $$$da_{i-1}, da_{i}$$$, we can apply it one more time, but any further applications would be useless and can be replaced with operations of type $$$(6)$$$.
We can use a DP solution to check if it is possible to convert $$$da$$$ to $$$db$$$, where $$$dp_{i,x}$$$ is equal to $$$1$$$ if it is possible to convert the suffix $$$[i,n-1]$$$ of $$$da$$$ to the corresponding suffix of $$$db$$$, given that $$$x$$$ operations of type $$$(5)$$$ were already applied on $$$da_{i-1}, da_{i}$$$, and is equal to $$$0$$$ otherwise. The answer to the problem is YES if $$$dp_{1,0} = 1$$$ and NO otherwise.
We iterate on the number of operations $$$j$$$ of type $$$(5)$$$ to perform on $$$da_i, da_{i+1}$$$, and we need to check that $$$d_i - x - j$$$ is non-negative (as $$$da_i \le db_i$$$ must hold) and even (as operation $$$(6)$$$ can only decrease $$$d_i$$$ by $$$2$$$); if both conditions hold, then $$$dp_{i,x}$$$ is equal to $$$1$$$ whenever $$$dp_{i+1,j} = 1$$$.
But one corner case to consider is when $$$da_i = 1, d_i > 0, x + j = 0$$$. In this case, we did not use any operation of type $$$(5)$$$ to add anything to $$$da_i$$$, and we cannot use operations of type $$$(6)$$$ as $$$da_i = 1$$$, but we need to increase $$$da_i$$$ as $$$d_i > 0$$$; so this case is impossible and should be skipped.
The remaining details of the DP are simple and can be viewed in the "Code" section below (for example, $$$d_1, d_{n-1}$$$ don't need to be even and $$$da_1, da_{n-1}$$$ don't need to be bigger than $$$1$$$, because of operations $$$(3), (4)$$$).
$$$O(n)$$$.
#include <bits/stdc++.h>
using namespace std;
#pragma GCC optimize("Ofast","unroll-loops")
#pragma GCC target("popcnt")
const int SZ=2e5+7;
int n,a[SZ],b[SZ],d[SZ],da[SZ],dp[SZ][3];
// dp over the difference array: x = how many type-(5) operations (raising both da[i-1] and da[i]) were already chosen
int solve(int i,int x){
if(i==n-1)return 1;
int&ret=dp[i][x];
if(ret!=-1)return ret;
ret=0;
for(int j=0;j<=2;j++){ // j = type-(5) operations chosen here (they raise both da[i] and da[i+1])
int X=d[i]-x-j; // remaining difference to be covered by type-(6) operations
if(X<0)continue;
if(i==0||i==n-2){ // borders: operations (3)/(4) remove the parity and the da>1 restrictions
ret|=solve(i+1,j);
continue;
}
if(da[i]==1&&x+j==0&&d[i]>0)continue; // impossible: da[i]=1 and no type-(5) help, yet d[i]>0 must still be covered
if(X&1)continue; // each type-(6) operation covers exactly 2, so the remainder must be even
ret|=solve(i+1,j);
}
return ret;
}
int main(){
ios_base::sync_with_stdio(0);cin.tie(0);
cin >> n;
for(int i=0;i<n;i++)cin >> a[i];
for(int i=0;i<n;i++)cin >> b[i];
for(int i=0;i<n-1;i++){
da[i]=a[i+1]-a[i];
d[i]=(b[i+1]-b[i])-da[i];
if(d[i]<0){
cout << "NO";
return 0;
}
}
memset(dp,-1,sizeof(dp));
if(solve(0,0))cout << "YES";
else cout << "NO";
return 0;
}
In the case of "YES", can you find the minimum number of operations required to convert $$$a$$$ into $$$b$$$?
B. AUBREY
linear-algebra
combinatorics
dynamic-programming
trees
Can you find the summation of all possible values of subset-XORs? (not necessarily distinct values, just all $$$2^k$$$ subsets where $$$k$$$ is the size of the original set)
Can you even find the summation of all possible distinct values of subset-XORs? If not, maybe try to learn something called XOR-basis.
First of all, using XOR-basis as a blackbox, the summation of all possible distinct values of subset-XORs of a set $$$S$$$ is equal to the summation of all possible subset-XORs of $$$\text{basis}(S)$$$, this is from the properties of XOR-basis.
To find the summation of all possible subset-XORs, we count the contribution of every bit. Suppose bit $$$i$$$ appeared in $$$Y$$$ masks and did not appear in $$$Z$$$ masks $$$(Y + Z = n)$$$. We can include any subset of the $$$Z$$$ masks, as they do not affect bit $$$i$$$ of the XOR. From the $$$Y$$$ masks we need to choose $$$x$$$ of them such that $$$x$$$ is odd, and the number of ways to choose an odd number of objects from $$$m$$$ objects is $$$2^{m-1}$$$, making the contribution equal to $$$\text{(value of bit)} \times 2^{Y-1} \times 2^Z = \text{(value of bit)} \times 2^{n-1}$$$ (this only works if bit $$$i$$$ appeared at least once, i.e., $$$Y \ge 1$$$).
Since each bit that appears at least once contributes its value times the same constant $$$2^{n-1}$$$, the answer is the bitwise OR of all the masks multiplied by $$$2^{n-1}$$$. So for a set $$$S$$$, the summation of the distinct values of subset-XORs is equal to $$$2^{|\text{basis}(S)|-1} \times \text{OR}(\text{basis}(S))$$$, where $$$\text{OR}(H)$$$ is the cumulative bitwise-OR of the elements of $$$H$$$.
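As a quick check on a small made-up set: for $$$S = \{1, 2, 3\}$$$ we have $$$\text{basis}(S) = \{1, 2\}$$$, the distinct subset-XOR values are $$$\{0, 1, 2, 3\}$$$ with sum $$$6$$$, and indeed $$$2^{|\text{basis}(S)|-1} \times \text{OR}(\text{basis}(S)) = 2^{1} \times 3 = 6$$$.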
The rest of the problem is just intermediate rerooting DP. Now, let $$$\text{down}_i, \text{up}_i$$$ be the set of values of the nodes in the subtree of node $$$i$$$ and the set of values of the nodes not in the subtree of node $$$i$$$, respectively. Let $$$D_i = \text{basis}(\text{down}_i)$$$ and $$$U_i = \text{basis}(\text{up}_i)$$$; the sizes of $$$D_i, U_i$$$ cannot exceed $$$\log_2(\max_A)$$$ (this is from the properties of XOR-basis), so it is completely fine to calculate and store them.
Now, to find the answer of a normal subtree, we can find $$$D_i$$$ recursively by using the fact that $$$\text{basis}(A \cup B) = \text{basis}(\text{basis}(A) \cup \text{basis}(B))$$$ (where $$$A,B$$$ are two sets): we calculate the $$$\text{basis}$$$ of the union of the bases of the subtrees of the children of node $$$i$$$, then add the value of node $$$i$$$ to the $$$\text{basis}$$$.
Now, to find the answer of an up-subtree, we can find $$$U_i$$$ similarly: it is equal to the $$$\text{basis}$$$ of the union of $$$U_{p_i}$$$ (where $$$p_i$$$ is the parent of node $$$i$$$), the value of node $$$p_i$$$, and $$$D_j$$$ for every child $$$j$$$ of $$$p_i$$$ except node $$$i$$$ itself. To exclude node $$$i$$$ we can use prefix and suffix precalculations of the bases of the children of node $$$p_i$$$, and take the bases of the prefix before $$$i$$$ and the suffix after $$$i$$$.
After calculating the basis, the answer of the subtree of node $$$i$$$ is equal to $$$A_i = 2^{|D_i|-1} \times \text{OR}(D_i)$$$, and the answer of the up-subtree of node $$$i$$$ is equal to $$$B_i = 2^{|U_i|-1} \times \text{OR}(U_i)$$$.
The value of $$$f(i)$$$ when the tree is rooted at $$$1$$$ is equal to $$$A_i$$$, so we can initially put $$$\text{ans}_1 = \displaystyle\sum_{i=1}^n A_i$$$. Now consider a node $$$u$$$: the subtree of every node $$$i$$$ is exactly the same whether the tree is rooted at node $$$u$$$ or at node $$$p_u$$$, except for $$$i = u$$$ and $$$i = p_u$$$. When going from root $$$p_u$$$ to root $$$u$$$, node $$$p_u$$$'s subtree changes from the full tree (with answer $$$A_1$$$) to the up-subtree of its child $$$u$$$ (with answer $$$B_u$$$), and node $$$u$$$'s subtree changes from its normal subtree (with answer $$$A_u$$$) to the full tree (with answer $$$A_1$$$), while all other nodes' subtrees remain the same. We can conclude that $$$\text{ans}_i = \text{ans}_{p_i} + B_i - A_1 + A_1 - A_i = \text{ans}_{p_i} + B_i - A_i$$$, so we can calculate $$$\text{ans}_i$$$ for every $$$i$$$ recursively with a single $$$\text{dfs}$$$.
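A tiny worked check of the recurrence on a made-up tree (assuming, as above, that $$$f(i)$$$ is the sum of distinct subset-XORs of the values in the subtree of $$$i$$$): take a star with center $$$1$$$, leaves $$$2, 3$$$, and values $$$a = [1, 2, 3]$$$. Rooted at $$$1$$$: $$$A_1 = 2^{1} \times 3 = 6$$$, $$$A_2 = 2$$$, $$$A_3 = 3$$$, so $$$\text{ans}_1 = 11$$$. The up-subtree of node $$$2$$$ is $$$\{1, 3\}$$$, whose basis has size $$$2$$$ and OR $$$3$$$, so $$$B_2 = 6$$$ and $$$\text{ans}_2 = \text{ans}_1 + B_2 - A_2 = 11 + 6 - 2 = 15$$$, which matches summing $$$f$$$ over all three subtrees directly when the tree is rooted at $$$2$$$ ($$$6 + 6 + 3$$$).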
$$$O(n \log^2(\max_A))$$$.
#include <bits/stdc++.h>
#define sz(x) int(x.size())
#define pb push_back
using namespace std;
typedef long long ll;
const int SZ=2e5+7,MOD=998244353;
int add(int x,int y){
x+=y;
if(x>=MOD)x-=MOD;
return x;
}
int sub(int x,int y){
x-=y;
if(x<0)x+=MOD;
return x;
}
int mul(int x,int y){
return(x*1ll*y)%MOD;
}
struct basis{
int b[30]{},O=0,N=0;
int Sum(){ // OR(basis) * 2^(|basis|-1): sum of distinct subset-XOR values (0 for an empty basis)
return mul(O,(1<<N)>>1);
}
void clr(){
for(int i=0;i<30;i++)b[i]=0;
O=0,N=0;
}
void ins(int x){
for(int i=0;i<30;i++){
if(!((x>>i)&1))continue;
if(!b[i]){
b[i]=x,N++,O|=x;
return;
}
x^=b[i];
}
}
void ins(basis&x){
for(int i=0;i<30;i++){
if(x.b[i])ins(x.b[i]);
}
}
};
int n,a[SZ],p[SZ],id[SZ],up[SZ],down[SZ],ans[SZ];
vector<int>g[SZ];
basis bd[SZ],bu[SZ];
vector<basis>pf[SZ],sf[SZ]; // prefix/suffix bases over the children of each node
void dfs(int nd,int pr=-1){
int N=sz(g[nd]);
for(int i=0;i<N;i++){
if(g[nd][i]==pr)swap(g[nd][i],g[nd].back());
}
if(pr!=-1)g[nd].pop_back(),N--;
p[nd]=pr,bd[nd].ins(a[nd]);
pf[nd].resize(N),sf[nd].resize(N);
basis T;
for(int i=0;i<N;i++){
int ch=g[nd][i];
id[ch]=i;
dfs(ch,nd);
T.ins(bd[ch]);
pf[nd][i]=T;
}
bd[nd].ins(T);
T.clr();
for(int i=N-1;i>=0;i--){
int ch=g[nd][i];
T.ins(bd[ch]);
sf[nd][i]=T;
}
down[nd]=bd[nd].Sum();
}
void calcup(int nd){
if(up[nd]!=-1)return;
if(p[nd]!=-1){
int i=id[nd],pr=p[nd],N=sz(g[pr]);
calcup(pr),bu[nd].ins(bu[pr]),bu[nd].ins(a[pr]);
if(i)bu[nd].ins(pf[pr][i-1]);
if(i+1<N)bu[nd].ins(sf[pr][i+1]);
}
up[nd]=bu[nd].Sum();
}
void calc(int nd,int pr=-1){
if(pr!=-1)ans[nd]=add(ans[pr],sub(up[nd],down[nd]));
for(auto&i:g[nd]){
if(i==pr)continue;
calc(i,nd);
}
}
int main(){
ios_base::sync_with_stdio(0);cin.tie(0);
cin >> n;
for(int i=0;i<n-1;i++){
int x,y;cin >> x >> y;
g[x].pb(y),g[y].pb(x);
}
for(int i=1;i<=n;i++)cin >> a[i],up[i]=-1;
dfs(1);
for(int i=1;i<=n;i++)calcup(i),ans[1]=add(ans[1],down[i]);
calc(1);
for(int i=1;i<=n;i++){
cout << ans[i] << ' ';
}
}
C. HERO
number-theory
data-structures
matrices
Try to find the answer for the query if the query asked for $$$\text{LCM}(a_{l_j}, a_{{l_j}+1}, ..., a_{r_j})$$$ instead, with $$$n, Q \le 10^3$$$.
Try to calculate the contribution of every prime in the $$$\text{LCM}$$$ function.
Try to find the answer for the query if the query asked for $$$\displaystyle\sum_{L=l_j}^{r_j}\text{LCM}(a_L, a_{L+1}, ..., a_{r_j}) \times L \times r_j$$$ instead.
Try to solve the queries offline using a data structure, like a segment tree.
Is there a way to store the summation of historic values at each leaf $$$i$$$? (Historic values mean all past values that were assigned to leaf $$$i$$$ after each update).
Try to use matrices.
First, the $$$\text{LCM}$$$ of a set of numbers can be calculated by calculating the contribution of every prime $$$p$$$ that appears at least once in the prime factorization of one of the numbers in the set. The contribution of prime $$$p$$$ is equal to $$$p^{e_p}$$$, where $$$e_p$$$ is the maximum exponent of prime $$$p$$$ over all the numbers in the set, and the value of the $$$\text{LCM}$$$ is equal to the product of the contributions of all such primes.
We can precalculate the smallest prime factor of every number from $$$1$$$ to $$$10^6$$$ in approximately $$$10^6 \cdot \log \log 10^6 \approx 4.3 \times 10^6$$$ iterations. To find the prime factorization of a number $$$n$$$, we repeatedly take its smallest prime factor $$$\text{spf}_n$$$, add it to the factorization, and divide $$$n$$$ by $$$\text{spf}_n$$$, until $$$n$$$ equals $$$1$$$.
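As a side note, here is a minimal, self-contained sketch of this sieve and of factorizing one number with it (separate from the full solution below; the limit and the example number are illustrative only):
#include <bits/stdc++.h>
using namespace std;
const int LIM = 1e6 + 7; // illustrative limit (maximum value of a_i plus a bit)
int spf[LIM];            // spf[x] = smallest prime factor of x
int main(){
    for (int i = 2; i < LIM; i++) spf[i] = i;
    for (int i = 2; 1LL * i * i < LIM; i++)
        if (spf[i] == i)                         // i is prime
            for (int j = i * i; j < LIM; j += i)
                if (spf[j] == j) spf[j] = i;     // the first prime reaching j is its smallest factor
    int x = 360;                                 // example: 360 = 2^3 * 3^2 * 5^1
    while (x > 1) {
        int p = spf[x], e = 0;
        while (x % p == 0) x /= p, e++;
        cout << p << '^' << e << ' ';
    }
    cout << '\n';
}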
Now, to solve $$$\displaystyle\sum_{L=l_j}^{r_j}\text{LCM}(a_L, a_{L+1}, ..., a_{r_j}) \times L \times r_j$$$, we sweep the value $$$r$$$ from $$$1$$$ to $$$n$$$ and solve the queries $$$j$$$ $$$(1 \le j \le Q)$$$ such that $$$r_j = r$$$. While sweeping over $$$r$$$, we want to maintain an array whose $$$i$$$-th element has value $$$\text{LCM}(a_i, a_{i+1}, ..., a_r) \times i \times r$$$.
We can use a segment tree where, initially, every leaf $$$i$$$ stores the value $$$i$$$ (the parameters inside the $$$\text{LCM}$$$ function are still empty). When we go from $$$r-1$$$ to $$$r$$$ and add element $$$a_r$$$, we update the contribution of every prime $$$p$$$ in the prime factorization of $$$a_r$$$. Suppose we are currently updating prime $$$p$$$, which has exponent $$$e$$$ in $$$a_r$$$: we should raise the contribution of prime $$$p$$$ in every leaf $$$i$$$ from its current value $$$p^x$$$ to $$$p^{\max(e,x)}$$$. We can store the information about the exponents of prime $$$p$$$ in a monotonic stack that carries two values per element, $$$l$$$ and $$$v$$$, meaning that for every $$$i$$$ with $$$1 \le i \le l$$$, the maximum exponent of $$$p$$$ over the subarray $$$[i,r]$$$ is at least $$$v$$$. So, when maximizing all the suffixes with the new exponent $$$e$$$, we keep popping from the monotonic stack as long as $$$v < e$$$, and for each popped element we multiply the range $$$[\text{previous }l + 1, \text{current }l]$$$ by $$$p^{e-v}$$$ using lazy propagation, which corrects the contribution of prime $$$p$$$ in that range. (For more details about how the monotonic stack works see the code, I am not really good at explaining these details.)
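As a small made-up walkthrough for a single prime $$$p$$$: suppose the exponents of $$$p$$$ in $$$a_1, a_2, a_3$$$ are $$$2, 0, 1$$$. After processing $$$r = 1$$$ the stack is $$$[(1, 2)]$$$ and leaf $$$1$$$ has been multiplied by $$$p^2$$$; after $$$r = 2$$$ the stack is $$$[(1, 2), (2, 0)]$$$ and no multiplication happens. At $$$r = 3$$$ the new exponent is $$$1$$$: positions $$$2$$$ and $$$3$$$ currently carry exponent $$$0 < 1$$$, so leaves $$$2$$$ and $$$3$$$ are multiplied by $$$p^{1-0} = p$$$, the entries with value $$$0$$$ are popped and $$$(3, 1)$$$ is pushed, while position $$$1$$$ already carries exponent $$$2 \ge 1$$$ and is untouched. The stack ends as $$$[(1, 2), (3, 1)]$$$, and the accumulated multipliers $$$p^2, p, p$$$ at leaves $$$1, 2, 3$$$ match the suffix-maximum exponents $$$2, 1, 1$$$.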
Now that we can maintain the $$$\text{LCM}$$$ of every suffix while adding elements to the end of the array (by sweeping over $$$r$$$), we can calculate $$$\displaystyle\sum_{L=l_j}^{r}\text{LCM}(a_L, a_{L+1}, ..., a_{r}) \times L \times r$$$ by taking the sum of the values at the leaves from $$$l_j$$$ to $$$r$$$ using the segment tree (which is equal to the summation of the suffix $$$\text{LCM}$$$s, each multiplied by its left endpoint, as we initially put value $$$i$$$ at leaf $$$i$$$), then multiplying it by the value of $$$r$$$ as it is a common factor; and that is the answer for this subproblem.
Now, to solve the original problem, $$$\displaystyle\sum_{L=l_j}^{r_j}\sum_{R=L}^{r_j}\text{LCM}(a_L, a_{L+1}, ..., a_{R}) \times L \times R$$$, notice that for each index $$$L$$$ we want the summation of its answers at $$$R=L, R=L+1, ..., R={r_j}$$$, meaning the summation of all its values from its first update (at $$$r=L$$$) up to the update at $$$r=r_j$$$, where the value after each update is multiplied by that $$$r$$$ when taking the sum. So we need some way to store, at every leaf $$$i$$$, the summation of its values over all the sweep steps from $$$r-1$$$ to $$$r$$$.
We can do that by using matrices. Suppose each leaf $$$i$$$ currently has two values $$$[x, h]$$$, where $$$x$$$ is currently the value of $$$\text{LCM}(a_i, a_{i+1}, ..., a_{r}) \times i$$$, and $$$h$$$ is the summation of each value of $$$x$$$ multiplied by its $$$r$$$ over all the updates. We can do two operations on a leaf $$$i$$$:
The first one is to multiply the current value $$$x$$$ of leaf $$$i$$$ by a number $$$v$$$, i.e., to apply the transformation $$$[x, h] \rightarrow [x \cdot v, h]$$$.
So we multiply the row vector $$$[x, h]$$$ at leaf $$$i$$$ by the transition matrix $$$\begin{pmatrix} v & 0 \\ 0 & 1 \end{pmatrix}$$$ to transform it into $$$[x \cdot v, h]$$$.
The second one is to add the current value of $$$x$$$ at leaf $$$i$$$, multiplied by the current $$$r$$$, to the summation of historic values, i.e., to apply the transformation $$$[x, h] \rightarrow [x, h + x \cdot r]$$$.
So we multiply the row vector $$$[x, h]$$$ at leaf $$$i$$$ by the transition matrix $$$\begin{pmatrix} 1 & r \\ 0 & 1 \end{pmatrix}$$$ to transform it into $$$[x, h + x \cdot r]$$$.
So, we can initially store in leaf $$$i$$$ the matrix $$$[i, 0]$$$, and when doing updates on the values of $$$\text{LCM}$$$ when adding the primes of $$$a_r$$$, update by multiplying by the matrix of the first type above, and after finishing all updates multiply by the matrix of the second type above to update the value of the historic sum at each leaf. And the answer to the problem is just the summation of all historic values at each leaf from $$$l_j$$$ to $$$r_j$$$.
The following solution should be able to pass within the time limit of $$$5$$$ seconds if it is well-implemented; but just to optimize the runtime, we can skip multiplying by lazy "values" (matrices) that are equal to the $$$2 \times 2$$$ identity matrix.
$$$O(n\log n \log(\max_A) + Q\log n)$$$.
#include <bits/stdc++.h>
#define pb push_back
#define sz(x) int(x.size())
#define bk(x) x.back()
using namespace std;
#pragma GCC optimize("Ofast","unroll-loops")
#pragma GCC target("popcnt")
const int SZ=2e5+7;
int MOD=998244353;
int add(int x,int y){
x+=y;if(x>=MOD)x-=MOD;
return x;
}
int mul(int x,int y){
return(x*1ll*y)%MOD;
}
array<int,4>lz[SZ*4],ntrl={1,0,0,1};
array<int,2>sg[SZ*4];
int spf[SZ*5],a[SZ],ans[SZ],q,n;
vector<array<int,2>>E[SZ*5],L[SZ];
array<int,2>MUL1(array<int,2>&x,array<int,4>&y){ // row vector x times 2x2 matrix y (row-major)
array<int,2>ret;
ret[0]=add(mul(x[0],y[0]),mul(x[1],y[2]));
ret[1]=add(mul(x[0],y[1]),mul(x[1],y[3]));
return ret;
}
array<int,4>MUL2(array<int,4>&x,array<int,4>&y){
array<int,4>ret;
ret[0]=add(mul(x[0],y[0]),mul(x[1],y[2]));
ret[1]=add(mul(x[0],y[1]),mul(x[1],y[3]));
ret[2]=add(mul(x[2],y[0]),mul(x[3],y[2]));
ret[3]=add(mul(x[2],y[1]),mul(x[3],y[3]));
return ret;
}
void build(int i=1,int l=1,int r=n){
lz[i]=ntrl;
if(l==r){
sg[i]={l,0};
return;
}
int md=(l+r)>>1;
build(i*2,l,md),build(i*2+1,md+1,r);
sg[i][0]=add(sg[i*2][0],sg[i*2+1][0]);
sg[i][1]=add(sg[i*2][1],sg[i*2+1][1]);
}
void prp(int i){
if(lz[i]==ntrl)return;
for(int ch=i*2;ch<=i*2+1;ch++){
sg[ch]=MUL1(sg[ch],lz[i]);
lz[ch]=MUL2(lz[ch],lz[i]);
}
lz[i]=ntrl;
}
void upd(int l,int r,array<int,4>M,int i=1,int lx=1,int rx=n){
if(rx<l||r<lx)return;
if(lx>=l&&rx<=r){
sg[i]=MUL1(sg[i],M);
lz[i]=MUL2(lz[i],M);
return;
}
int md=(lx+rx)>>1;prp(i);
upd(l,r,M,i*2,lx,md),upd(l,r,M,i*2+1,md+1,rx);
sg[i][0]=add(sg[i*2][0],sg[i*2+1][0]);
sg[i][1]=add(sg[i*2][1],sg[i*2+1][1]);
}
void ml(int l,int r,int g){ // multiply the LCM part (x) of every leaf in [l,r] by g
array<int,4>T={g,0,0,1};
upd(l,r,T);
}
void hs(int l,int r){ // add x*r to the historic sum (h) of every leaf in [l,r]
array<int,4>T={1,r,0,1};
upd(l,r,T);
}
int qry(int l,int r,int i=1,int lx=1,int rx=n){
if(rx<l||r<lx)return 0;
if(lx>=l&&rx<=r){
return sg[i][1];
}
int md=(lx+rx)>>1;prp(i);
return add(qry(l,r,i*2,lx,md),qry(l,r,i*2+1,md+1,rx));
}
int C,p[20],e[20],V[20][20];
void pf(int x){ // factorize x with spf[]; stores primes p[], exponents e[], and powers V[i][k]=p[i]^k
C=0;
while(x>1){
int P=spf[x];x/=P;
if(C&&p[C-1]==P)e[C-1]++,V[C-1][e[C-1]]=V[C-1][e[C-1]-1]*P;
else V[C][0]=1,V[C][1]=p[C]=P,e[C]=1,C++;
}
}
int main(){
ios_base::sync_with_stdio(0);cin.tie(0);
for(int i=0;i<SZ*5;i++)spf[i]=i,E[i]={{0,MOD}};
for(int i=2;i*i<SZ*5;i++){
if(spf[i]!=i)continue;
for(int j=i*i;j<SZ*5;j+=i)spf[j]=min(spf[j],i);
}
cin >> n;
for(int i=1;i<=n;i++)cin >> a[i];
build();
cin >> q;
for(int i=0;i<q;i++){
int l,r;cin >> l >> r;
L[r].pb({l,i});
}
for(int r=1;r<=n;r++){
pf(a[r]);
for(int i=0;i<C;i++){
int x=p[i],c=e[i];
E[x].pb({r,0});
while(c>bk(E[x])[1]){ // raise the suffix bands of prime x whose exponent is still below c
auto[idx,f]=bk(E[x]);
E[x].pop_back();
auto[l,tmp]=bk(E[x]);
ml(l+1,idx,V[i][c-f]);
}
E[x].pb({r,c});
}
hs(1,r);
for(auto&[l,idx]:L[r]){
ans[idx]=qry(l,r);
}
}
for(int i=0;i<q;i++){
cout << ans[i] << "\n";
}
return 0;
}
This problem is a tribute to my dear friend, HossamHero7 💙
Initially, the problem was supposed to just ask for $$$\displaystyle\sum_{L=l_j}^{r_j}\sum_{R=L}^{r_j}\text{LCM}(a_L, a_{L+1}, ..., a_{R})$$$ without multiplying by the boundaries of the subarray, but tester ismail-but-noob mentioned that he had seen it before in one of errichto's IOI camps; it turned out it had appeared before in POI 2023, errichto explained his solution to it in an unlisted YouTube video, and the official POI channel explained it in a public video (in Polish, probably). I noticed that neither errichto nor the official POI mentioned anything related to matrices when talking about how to maintain the historic values. In reality, maintaining historic values over updates in a segment tree is usually done by storing more lazy values that tell us what the updates look like, and sometimes it cannot be solved with standard matrix multiplication, like GSS2 (beware though, the problem statement there is kinda broken, so read the comments under the problem to know what it is actually asking for). Other problems might be extremely hard (if not impossible) to solve without facilitating them by using matrices, like NOIP2022p4. So I thought about making the problem harder to implement without matrices by adding the multiplication by $$$L, R$$$ (the boundaries of the subarray) to the solution.
D. KEL
output-only
math
Kel is just Kel; he is wasting his time solving useless problems.
Consider the case where $$$L = W = \infty$$$; the blanket will look like a Cartesian plane.
One can notice that any straight line not passing through the origin will intersect at most $$$3$$$ quadrants; a line passing through the origin will intersect exactly $$$2$$$ opposite quadrants, unless it is the $$$X \text{-axis}$$$ or the $$$Y \text{-axis}$$$, in which case it will intersect all $$$4$$$ quadrants.
So, the solution does not depend on the value of $$$L, W$$$, and is just equal to $$$2$$$.
$$$O(1)$$$.
#include <bits/stdc++.h>
using namespace std;
int L,W;
int main(){
ios_base::sync_with_stdio(0);cin.tie(0);
cin >> L >> W;
cout << 2;
}
This problem is a tribute to my dear friend, Ahmed_Solyman 💛
E. MARI
combinatorics
Try to solve when the elements of the array are distinct.
Try to find the contribution of each element.
We will solve the problem by counting the contribution of each element, first supposing that the elements are distinct. Suppose we are currently counting the contribution of element $$$i$$$: we want to know in how many configurations element $$$a_i$$$ could end up as the final nutrition value. For each oven $$$j-1$$$ appearing after oven $$$i-1$$$ (i.e., $$$j > i$$$), there can be two cases:
$$$a_j > a_i$$$, in which case Mari should choose the freezing configuration $$$\min$$$ because $$$\min(a_i, a_j) = a_i$$$
$$$a_j < a_i$$$, in which case Mari should choose the heating configuration $$$\max$$$ because $$$\max(a_i, a_j) = a_i$$$
As for oven $$$i-1$$$ itself, let $$$x$$$ be the nutrition value of the cookie before entering oven $$$i-1$$$; $$$x$$$ too can be either smaller than $$$a_i$$$ or bigger than $$$a_i$$$, so similarly Mari can choose exactly one of the $$$\min / \max$$$ functions to convert $$$x$$$ to $$$a_i$$$. But the choices for ovens from $$$1$$$ to $$$i-2$$$ can be arbitrary, so the contribution of $$$a_i$$$ is equal to $$$a_i \times 2^{i-2}$$$ (and the initial nutrition value $$$a_1$$$ contributes with a coefficient of $$$1$$$).
So the solution of the problem when the elements are distinct is equal to $$$\left(a_1 + \displaystyle\sum_{i=2}^n a_i \times 2^{i-2}\right) \bmod (10^9 + 7)$$$.
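As a quick check on a made-up array: for $$$a = [3, 1, 2]$$$ the four configurations of the two ovens give the final values $$$\min/\min \to 1$$$, $$$\min/\max \to 2$$$, $$$\max/\min \to 2$$$, $$$\max/\max \to 3$$$, whose sum is $$$8$$$, and the formula gives $$$3 + 1 \cdot 2^{0} + 2 \cdot 2^{1} = 8$$$ as well.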
This is in fact also the solution for the original problem. The reason is that we can replace element $$$a_i$$$ with the ordered pair $$$(a_i, i)$$$, so when it meets a succeeding element $$$(a_j, j)$$$ such that $$$a_i = a_j$$$, the $$$\min$$$ function produces element $$$a_i$$$ and the $$$\max$$$ function produces element $$$a_j$$$, and this does not affect the correctness of the solution as the value of the element is given priority in the pair comparison.
$$$O(n)$$$.
#include <bits/stdc++.h>
using namespace std;
#pragma GCC optimize("Ofast","unroll-loops")
#pragma GCC target("popcnt")
const int SZ=2e5+7;
int MOD=1e9+7;
int add(int x,int y){ // (x+y) modulo the global MOD
x+=y;if(x>=MOD)x-=MOD;
return x;
}
int a[SZ],n,p[SZ];
int main(){
ios_base::sync_with_stdio(0);cin.tie(0);
p[0]=1;
for(int i=1;i<SZ;i++)p[i]=add(p[i-1],p[i-1]);
cin >> n;
int ans=0;
for(int i=0;i<n;i++){
int x;cin >> x;
ans=add(ans,(x*1ll*p[max(i-1,0)])%MOD);
}
cout << ans;
return 0;
}
There is another way to solve this problem using the inclusion-exclusion principle; this is how tester AbdelmagedNour solved it. Can you find this solution?
F. BASIL
game-theory
sqrt-decomposition
hashing
If there is exactly one type it is a standard nim game, try to solve when there are two types.
Let the cumulative XOR of all elements with type $$$x$$$ $$$(1 \le x \le n)$$$ in a game be $$$k_x$$$ (i.e., $$$k_x = \displaystyle\bigoplus_{i|t_i = x} a_i$$$).
Claim: A state is a losing state if and only if $$$k_x = 0$$$ for every $$$x$$$ from $$$1$$$ to $$$n$$$.
Proof:
The base case is the losing state where no seeds remain, which has $$$k = [0, 0, ..., 0]$$$.
Suppose we have a set of types $$$X = \{x_1, x_2, ... x_m\}$$$ $$$(m \ge 1, x_1 < x_2 < ... < x_m)$$$ such that $$$k_x \ne 0$$$ if $$$x \in X$$$ and $$$k_x = 0$$$ otherwise. We can go from this state to $$$k = [0, 0, ..., 0]$$$ by choosing some flower pot with type $$$x_m$$$ and decreasing it so that the total XOR $$$k_{x_m}$$$ becomes $$$0$$$ (this is possible as it is a normal Nim game on type $$$x_m$$$), and then setting all the flower pots with type less than $$$x_m$$$ to $$$0$$$.
We cannot go from the state $$$k = [0, 0, ..., 0]$$$ to the state $$$k = [0, 0, ..., 0]$$$ again, as decreasing the number of seeds in any flower pot makes the XOR value of its seed type non-zero, and there is no way to change the number of seeds in another flower pot of the same type.
So, every winning state can go to a losing state, and every losing state can't go to another losing state.
So we just need some fast method to check whether the array $$$k$$$ is the zero array or not when we are restricted to the subarray $$$[l, r]$$$. There are two methods to do that:
Method 1: Deterministic offline Mo's algorithm in $$$O((n + q) \sqrt n)$$$
We can use Mo's algorithm to sweep over the array quickly and calculate the answer of every query in a suitable order. When we add or remove an element $$$i$$$ from the current array $$$k$$$, we do $$$k_{t_i} := k_{t_i} \oplus a_i$$$, and keep track of the number of non-zero elements of $$$k$$$.
For each query, if the number of non-zero elements in $$$k$$$ is non-zero, the state is a winning state and the winner is Sunny, otherwise the winner is Basil.
Method 2: Non-deterministic online array hashing in $$$O(n + q)$$$
We can use hashing to store the array $$$k$$$ for every prefix of the array $$$a$$$ quickly. Let $$$p_i$$$ be the hash of the array $$$k$$$ built from the prefix $$$[1, i]$$$, i.e., of the array with $$$k_x=\displaystyle\bigoplus_{j \le i,\ t_j = x} a_j$$$.
For a subarray $$$[l,r]$$$ we want to know whether ($$$k_x$$$ at prefix $$$r$$$) XOR ($$$k_x$$$ at prefix $$$l-1$$$) is equal to $$$0$$$ for every $$$x$$$ or not; this happens exactly when $$$k_x$$$ at prefix $$$r$$$ is equal to $$$k_x$$$ at prefix $$$l-1$$$ for every $$$x$$$, i.e., (up to hash collisions) when $$$p_r = p_{l-1}$$$. So if $$$p_r = p_{l-1}$$$, Basil is the winner, otherwise Sunny is the winner.
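One concrete way to realize such a hash of the whole array $$$k$$$ (and essentially what the second implementation below does) is a weighted sum over the types, $$$\text{hash}(k) = \displaystyle\sum_{x=1}^{n} k_x \cdot B^x \bmod M$$$ for some fixed base $$$B$$$ and modulus $$$M$$$; when going from prefix $$$i-1$$$ to prefix $$$i$$$ only $$$k_{t_i}$$$ changes, so $$$p_i$$$ can be obtained from $$$p_{i-1}$$$ in $$$O(1)$$$ by adding $$$((k_{t_i} \oplus a_i) - k_{t_i}) \cdot B^{t_i}$$$.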
Since hashing gives an integer value limited to a C++ int or long long, there is a chance of collision between two unequal array configurations of $$$k$$$. But this chance is very low and very hard to hit in practice.
Method $$$1$$$: $$$O((n+q) \sqrt n)$$$.
Method $$$2$$$: $$$O(n+q)$$$
#include <bits/stdc++.h>
using namespace std;
const int SZ=2e5+7;
int n,q,C,B;
int a[SZ],t[SZ],X[SZ];
bool ans[SZ];
array<int,3>Q[SZ];
void upd(int i){ // toggle element i in the current window; C counts types with non-zero XOR
int T=t[i];
if(X[T])C--;
X[T]^=a[i];
if(X[T])C++;
}
int main(){
ios_base::sync_with_stdio(0);cin.tie(0);
cin >> n,B=sqrt(n);
for(int i=1;i<=n;i++)cin >> a[i];
for(int i=1;i<=n;i++)cin >> t[i];
cin >> q;
for(int i=0;i<q;i++){
int l,r;cin >> l >> r;
Q[i]={l,r,i};
}
sort(Q,Q+q,[&](array<int,3>a,array<int,3>b){
int x=a[0]/B,y=b[0]/B;
if(x==y){
if(x&1)return a[1]>b[1];
return a[1]<b[1];
}
return x<y;
});
int l=1,r=0;
for(int i=0;i<q;i++){
auto[lq,rq,idx]=Q[i];
while(r<rq)r++,upd(r);
while(l>lq)l--,upd(l);
while(r>rq)upd(r),r--;
while(l<lq)upd(l),l++;
ans[idx]=C;
}
for(int i=0;i<q;i++)cout << (ans[i]?"SUNNY\n":"BASIL\n");
}
#include <bits/stdc++.h>
using namespace std;
const int SZ=2e5+7,MOD=1e9+7;
const long long B=2498593;
int n,q;
int a[SZ],t[SZ];
long long x[SZ],b[SZ],h[SZ];
int main(){
ios_base::sync_with_stdio(0);cin.tie(0);
cin >> n;
b[0]=1;
for(int i=1;i<SZ;i++)b[i]=(b[i-1]*B)%MOD;
for(int i=1;i<=n;i++)cin >> a[i];
for(int i=1;i<=n;i++){
cin >> t[i];
h[i]=h[i-1]+(((x[t[i]]^a[i])-x[t[i]])*b[t[i]])%MOD;
if(h[i]<0)h[i]+=MOD;
if(h[i]>=MOD)h[i]-=MOD;
x[t[i]]^=a[i];
}
cin >> q;
for(int i=0;i<q;i++){
int l,r;cin >> l >> r;
if(h[r]==h[l-1])cout << "BASIL\n";
else cout << "SUNNY\n";
}
}
This problem is a tribute to my dear best friend, yahia 💚
G. OMORI
interactive
adhoc
Since there are only $$$3$$$ main types other than the OMORI doll, try to eliminate each type one by one first.
Suppose there are $$$X$$$ Kel-based dolls, $$$Y$$$ Aubrey-based dolls, $$$Z$$$ Hero-based dolls, $$$1$$$ OMORI doll.
$$$(X, Y, Z \ge 1)$$$
$$$X+Y+Z+1=n$$$.
First, consider asking the queries ? 1 i
for every $$$i$$$ from $$$2$$$ to $$$n$$$. If OMORI answers all of them with $$$0$$$, that means that doll $$$1$$$ cannot beat any other doll, so it has to be the OMORI doll. Otherwise, assume for simplicity that doll $$$1$$$ is Kel-based. Any query that OMORI answers with $$$1$$$ means that doll $$$i$$$ is a doll that Kel-based dolls can beat, i.e., an Aubrey-based doll. Mark doll $$$1$$$ and all dolls that doll $$$1$$$ can beat (the Aubrey-based dolls) as non-OMORI dolls, and choose one of the Aubrey-based dolls $$$A$$$. This uses exactly $$$n-1$$$ queries.
After that, Ask ? A i
for all $$$i$$$ $$$(1 \le i \le n)$$$ such that $$$i$$$ is not yet marked as a non-OMORI doll. Any query that OMORI answers with $$$1$$$ means that doll $$$i$$$ is a doll that Aubrey-based dolls can beat, i.e., a Hero-based doll. Mark all dolls that doll $$$A$$$ can beat (the Hero-based dolls) as non-OMORI dolls, and choose one of the Hero-based dolls $$$H$$$. This uses exactly $$$X+Z$$$ queries.
After that, Ask ? H i
one by one for all $$$i$$$ $$$(1 \le i \le n)$$$ such that $$$i$$$ is not yet marked as a non-OMORI doll. If OMORI answers a query with $$$0$$$, that means that this doll $$$i$$$ is the OMORI doll, because all the other remaining unmarked dolls are Kel-based dolls, which Hero-based dolls can beat; so if it is not a Kel-based doll it has to be the OMORI doll. You can save one query by not asking when there is exactly one unmarked doll left: this doll has to be the OMORI doll, as all other $$$n-1$$$ dolls have been marked as non-OMORI dolls. This uses at most $$$X-1$$$ queries.
This solution uses at most $$$n-1+(X+Z)+(X-1)$$$ queries, which is $$$3n-7$$$ queries in the worst case.
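To see where the worst case comes from: the total is $$$n-1+(X+Z)+(X-1) = n - 2 + 2X + Z$$$, and subject to $$$X + Y + Z = n - 1$$$ with $$$X, Y, Z \ge 1$$$ it is maximized at $$$Y = Z = 1$$$, $$$X = n - 3$$$, giving $$$n - 2 + 2(n-3) + 1 = 3n - 7$$$.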
Time complexity: $$$O(n)$$$.
Used number of operations: $$$3n-7$$$.
#include <bits/stdc++.h>
#define pb push_back
#define sz(x) int(x.size())
#define bk(x) x.back()
#define fsh cout.flush()
using namespace std;
int n;
vector<int>v,w;
int ask(int i,int j){
cout << "? " << i << ' ' << j << endl;fsh;
int x;cin >> x;
return x;
}
int main(){
ios_base::sync_with_stdio(0);cin.tie(0);
cin >> n;
int G=-1; // some doll that doll 1 beats (stays -1 if doll 1 beats nobody, i.e., doll 1 is OMORI)
for(int i=2;i<=n;i++){
if(!ask(1,i))v.pb(i); // dolls that doll 1 cannot beat
else G=i;
}
if(G==-1){
cout << "! " << 1 << endl;fsh;
return 0;
}
int H; // some doll that doll G can beat (guaranteed to exist, since every main type appears at least once)
for(auto&i:v){
if(!ask(G,i)){
w.pb(i);
}
else H=i;
}
int x=bk(w); // keep one candidate aside: if H beats all the others, this one must be OMORI
w.pop_back();
for(auto&i:w){
if(!ask(H,i)){
cout << "! " << i << endl;fsh;
return 0;
}
}
cout << "! " << x << endl;fsh;
return 0;
}
Can you find a solution in less than $$$3n-7$$$ queries? (Randomized solutions are allowed).
Feel free to provide any feedback or constructive criticism in the comments (For the problem ideas, problem setting, problem statements, the editorial itself, etc..) as it will be useful for me to create contests in the future 🙂 (Also if you don't mind, don't forget to upvote the blog, I want to be a top contributor for once in my life 🥺).
Problems are very nice, thank you for the contest
great contest, well done !
I really need to learn how to generate good testcases lol
for about 40 mins i thought i was the worst person at implementing mo (to be fair i still am)
I think you saw my solution
I solved $$$F$$$ using probabilities.
Let's assume that $$$S$$$ is a random subset of all types. It's obvious that if the winner is BASIL (i.e., the XOR sum of each type in $$$[l, r]$$$ is $$$0$$$), then $$$S$$$'s XOR sum is also $$$0$$$.
If at least one type's XOR sum is greater than $$$0$$$, the probability that it does not belong to the subset $$$S$$$ is at most $$$\frac{1}{2}$$$. I used 30 random subsets, so the probability of getting a wrong answer is less than $$$\frac{1}{2^{30}}$$$.
[submission:286472505]
Excellent contest. Thanks my dear friend Hexagons for making this beautiful work.
BONUS: Opinions on the statement/story of each problem?