This is the editorial for the recent Teamscode Summer 2023 contest, and the problems are open for upsolving on this [gym](https://codeforces.me/gym/104520). Problems were prepared by [user:oursaco,2023-08-22], [user:dutin,2023-08-22], [user:thehunterjames,2023-08-22], [user:Bossologist,2023-08-22], [user:Esomer,2023-08-22], [user:danx,2023-08-22], [user:codicon,2023-08-22], [user:willy108,2023-08-22], and [user:hyforces,2023-08-22]. The problems were tested by [user:omeganot,2023-08-22], [user:codicon,2023-08-22], [user:cry,2023-08-22], [user:skye_,2023-08-22], [user:Litusiano_,2023-08-22], and [user:apple_method,2023-08-22].↵
↵
### [A. Who is cooking?](https://codeforces.me/gym/104520/problem/A)↵
↵
<spoiler summary="Solution">↵
danx↵
</spoiler>↵
↵
<spoiler summary="Code">↵
~~~~~↵
print("Esomer")↵
~~~~~↵
</spoiler>↵
↵
### [B. Restaurant Sorting](https://codeforces.me/gym/104520/problem/B)↵
↵
<spoiler summary="Solution">↵
↵
The answer is $n$ minus the length of the longest prefix of the array whose elements are already in their sorted positions. The elements of that prefix never need to be popped out of the stack, and you must pop out at least everything from the first mismatch onward. With the sort, the final complexity is $O(n \log n)$.↵
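↵
For example, for the array $[1, 3, 2]$ the sorted array is $[1, 2, 3]$: only the first element is already in place, so the answer is $3 - 1 = 2$.↵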
↵
↵
</spoiler>↵
↵
↵
<spoiler summary="Code">↵
~~~~~↵
#include <iostream>↵
#include <vector>↵
#include <algorithm>↵
↵
using namespace std;↵
↵
int main(){↵
cin.tie(0) -> sync_with_stdio(0);↵
int T; ↵
cin >> T;↵
while(T--){↵
int n;↵
cin >> n;↵
vector<int> arr(n);↵
for(int& x : arr)↵
cin >> x;↵
vector<int> arr2 = arr;↵
sort(arr2.begin(), arr2.end());↵
int cnt = 0;↵
for(int i = 0; i<n; i++){↵
if(arr[i] == arr2[i]){↵
cnt++;↵
}else{↵
break ;↵
}↵
}↵
cout << n - cnt << "\n";↵
}↵
return 3366^3366;↵
}↵
~~~~~↵
</spoiler>↵
↵
### [C. Largest Palindromic Subsequence](https://codeforces.me/gym/104520/problem/C)↵
↵
<spoiler summary="Hint 1">↵
Look carefully at the samples. Do you notice a pattern? Try manually finding the answer for other strings.↵
</spoiler>↵
↵
<spoiler summary="Solution">↵
**Lemma 1:** The largest palindromic subsequence is the largest character in the string, repeated the number of times it appears in the string.↵
↵
Proof: Let $c$ be the largest character and suppose it appears $k$ times in total. Any string lexicographically greater than $c$ repeated $k$ times must contain it as a proper prefix, since no available character exceeds $c$. But a palindromic subsequence with that prefix and length greater than $k$ would also have to end with $k$ copies of $c$, forcing it to use more than $k$ copies of $c$ in total, which is impossible. Hence the greatest character repeated the maximal number of times is the largest palindromic subsequence. $\blacksquare$↵
↵
By Lemma 1, we can simply perform two sweeps over the array to find the maximum character and its number of occurrences.↵
↵
**Time Complexity:** $\mathcal{O}(\lvert s\rvert)$↵
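↵
For example, for $s = \texttt{abcbc}$ the largest character is $\texttt{c}$, which appears twice, so the answer is $\texttt{cc}$.↵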
</spoiler>↵
↵
↵
<spoiler summary="Code">↵
↵
~~~~~↵
import sys↵
input = sys.stdin.readline↵
↵
for tc in range(int(input())):↵
s = input()↵
print(max(s)*s.count(max(s)))↵
~~~~~↵
↵
↵
</spoiler>↵
↵
↵
### [D. Yet Another Math Query Problem](https://codeforces.me/gym/104520/problem/D)↵
↵
<spoiler summary="Hint 1">↵
Note that $a \leq b$.↵
</spoiler>↵
↵
<spoiler summary="Hint 2">↵
Imagine you have a good pair $(c, d)$. What will the other good pairs look like?↵
</spoiler>↵
↵
<spoiler summary="Hint 3">↵
$a + b + |a - b| = a + b + (b - a) = 2b$ (using $a \leq b$).↵
</spoiler>↵
↵
<spoiler summary="Hint 4">↵
What happens if $x$ is odd or if $x/2$ is not in the range $[l, r]$?↵
</spoiler>↵
↵
<spoiler summary="Solution">↵
Please, read all the hints if you haven't.↵
↵
To solve each query, we check whether $x$ is odd and whether $x/2$ is contained in the range $[l, r]$.↵
↵
If $x$ is odd or $x/2 \notin [l, r]$, the answer is $0$; otherwise it is $x/2 - l + 1$.↵
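↵
For example, for $l = 2$, $r = 5$, $x = 8$: $x$ is even and $x/2 = 4 \in [2, 5]$, so the answer is $4 - 2 + 1 = 3$, counting the pairs $(2, 4)$, $(3, 4)$ and $(4, 4)$.↵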
↵
Complexity of the solution: $\mathcal{O}(q)$.↵
</spoiler>↵
↵
<spoiler summary="Code">↵
~~~↵
#include <iostream>↵
using namespace std;↵
↵
#define ll long long↵
↵
int q;↵
ll l, r, x, ans;↵
↵
signed main() {↵
ios::sync_with_stdio(false); cin.tie(nullptr);↵
↵
cin >> q;↵
while (q--) {↵
cin >> l >> r >> x;↵
if ((x&1) || (x>>1) < l || (x>>1) > r) ans = 0;↵
else ans = (x>>1) - l + 1;↵
cout << ans << "\n";↵
}↵
}↵
~~~↵
</spoiler>↵
↵
↵
### [E. Evil Problemsetters](https://codeforces.me/gym/104520/problem/E)↵
↵
<spoiler summary="Hint 1">↵
Think about how each clue discards some values, and how to use them to discard every value except $x$.↵
</spoiler>↵
↵
<spoiler summary="Hint 2">↵
Can you use the prime decomposition of $x$ to know the answer? Try to think of a construction.↵
</spoiler>↵
↵
<spoiler summary="Solution">↵
We need to keep discarding possible values of $x$ until there's only one possible $x$. With this in mind, there are two methods for discarding values:↵
↵
↵
1. A clue with $c = 1$ and $|a-b| = d$ discards every value $y$ such that $y \nmid d$.↵
2. A clue with $c = 0$ and $|a-b| = d$ discards every value $y$ such that $y \mid d$.↵
↵
↵
The first observation is that we need at least one clue with $c = 1$: otherwise we can't discard every other value, as there are infinitely many primes.↵
↵
Additionally, we need to discard all the divisors of $x$ except $x$ itself. Let $k$ be the number of distinct prime factors of $x$; then the minimum number of clues needed to discard all these divisors is $k$. It can't be less than $k$, because each number of the form $\frac{x}{p}$, where $p$ is a prime in the prime decomposition of $x$, must be discarded directly with method $2$. Moreover, a single clue can't discard two numbers of this form at once: their least common multiple is $x$ itself, so a $d$ divisible by both would also discard $x$.↵
↵
So the minimum number of such clues is $k$, and we discard all the numbers of the form $\frac{x}{p}$ together with their divisors. Furthermore, this is enough to discard all the proper divisors of $x$.↵
↵
<spoiler summary="Proof">↵
If there's a number $y$ different than $x$ such that $y$ divides $x$, then the set of primes that divide $y$ is a subset of the set of primes that divide $x$. ↵
↵
↵
- If it is a proper subset, let $p$ be the prime that divides $x$ but doesn't divide $y$. When considering $\frac{x}{p}$ we are already discarding $y$. ↵
- If it is the same set, at least one of the primes in $y$ must be raised to a lower exponent than in $x$. Let one of those primes be $p$, then we are also discarding $y$ when considering $\frac{x}{p}$.↵
↵
↵
</spoiler>↵
↵
With this, we have proven a lower bound of $k + 1$ ($k$ is defined above) clues for the answer, and we've proved that it is possible to achieve this lower bound by giving a clue with $c = 1$ and $|a - b| = x$, and then discarding all the divisors of $x$ different than $x$ by giving clues with $c = 0$ and $|a - b| = \frac{x}{p}$, where $p$ takes the value of each different prime that divides $x$. Hence, this is always an optimal answer.↵
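↵
For example, for $x = 12$ (distinct primes $2$ and $3$, so $k = 2$) this construction uses $3$ clues with $|a - b| = 12, 6, 4$ and $c = 1, 0, 0$ respectively; the reference code below prints them as `1 13 1`, `1 7 0` and `1 5 0`.↵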
</spoiler>↵
↵
<spoiler summary="Code">↵
↵
~~~~~↵
#include<bits/stdc++.h>↵
↵
using namespace std;↵
↵
#define endl "\n"↵
↵
typedef long long ll;↵
typedef long double ld;↵
↵
const int MOD = 1e9 + 7;↵
↵
int lw[(int)1e6+1]; //Lowest prime factor that divides each index.↵
↵
void solve(){↵
int x; cin >> x;↵
int org = x;↵
vector<int> ans;↵
while(x > 1){↵
int val = lw[x];↵
ans.push_back(val);↵
while(x % val == 0){↵
x /= val;↵
}↵
}↵
cout << (int)ans.size() + 1 << endl;↵
cout << 1 << " " << 1 + org << " " << 1 << endl;↵
for(int y : ans) cout << 1 << " " << 1 + org/y << " " << 0 << endl;↵
}↵
↵
int main(){↵
ios_base::sync_with_stdio(0);↵
cin.tie(0);↵
↵
for(int i = 1; i <= 1e6; i++) lw[i] = i;↵
for(ll i = 2; i <= 1e6; i++){↵
if(lw[i] != i) continue;↵
for(ll j = i*i; j <= 1e6; j += i){↵
lw[j] = min(lw[j], (int)i);↵
}↵
}↵
↵
int tt; cin >> tt;↵
//~ int tt = 1; ↵
for(int t = 1; t <= tt; t++){↵
solve();↵
}↵
}↵
~~~~~↵
↵
↵
</spoiler>↵
↵
### [F. Maximum Trust](https://codeforces.me/gym/104520/problem/F)↵
↵
<spoiler summary="Solution">↵
One method that you might have tried at first is going through each tester and simply keeping track of the maximum possible trust achievable. However, this alone is not sufficient, since it is sometimes better to keep a non-positive trust score instead of a positive one. This is because multiplying two negatives creates a positive, and a negative trust value of large magnitude can potentially produce a large positive value later.↵
↵
This observation motivates the idea of maintaining two values instead of just one when going through the testers: the maximum possible trust, and the minimum possible (which is always non-positive since $0$ trust is always achievable). At each tester you compute all possible trust scores you can achieve using the two maintained values and either multiplying or adding the next one, and recalculate the new minimum and maximum scores. It can be shown that keeping track of just those two values is sufficient, as it is never possible to get a strictly higher/smaller score from a non-maximum/minimum score.↵
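↵
For example, with trust values $[-3, -4]$: after the first value the maximum achievable trust is $0$ but the minimum is $-3$, and multiplying that $-3$ by $-4$ gives $12$, which is the answer; tracking only the maximum would incorrectly yield $0$.↵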
↵
Complexity of the solution: $\mathcal{O}(N)$.↵
</spoiler>↵
↵
↵
<spoiler summary="Code">↵
~~~~~↵
#include <iostream>↵
#include <algorithm>↵
using namespace std;↵
↵
int main() {↵
std::ios_base::sync_with_stdio(false);↵
std::cin.tie(NULL);↵
↵
int T;↵
cin >> T;↵
↵
while (T--) {↵
int N;↵
cin >> N;↵
↵
long long pos = 0;↵
long long neg = 0;↵
for (int i = 0; i < N; i++) {↵
long long a;↵
cin >> a;↵
↵
//prev pos and neg↵
long long pos_temp = pos;↵
long long neg_temp = neg;↵
↵
pos = max(max(0ll, pos_temp + a), pos_temp * a);↵
pos = max(max(pos, neg_temp + a), neg_temp * a);↵
neg = min(min(0ll, neg_temp + a), neg_temp * a);↵
neg = min(min(neg, pos_temp + a), pos_temp * a);↵
}↵
↵
cout << pos << "\n";↵
}↵
return 0;↵
}↵
~~~~~↵
↵
↵
</spoiler>↵
↵
↵
### [G. Maximum Xor](https://codeforces.me/gym/104520/problem/G)↵
↵
↵
<spoiler summary="Solution">↵
Note that when evaluating $(v+x)\oplus (v+y)$, the highest bit matters the most; that is to say, we should process the bits from the most significant to the least significant. Forcing a higher bit on takes precedence over maximizing any bits that come after it, as that bit alone contributes more to our answer than all bits after it combined. So we enumerate the bits from high to low, checking whether each one can be forced on in the final result.↵
↵
Let the current considered bit be the $k$th bit. If $2^k\leq x$ and $2^k\leq y$, then it is impossible for this bit to be on, so subtract $2^k$ from both $x$ and $y$ and continue on to the lower bits. If $x=y$, the result is simply $0$; otherwise assume WLOG that $x>y$. If $a+x\geq 2^k$ but $a+y<2^k$ for some $0\leq a<z$, then it is possible to force the current bit on; such an $a$ exists if $x\neq y$ and $z-1+x\geq 2^k$. The main observation afterwards is that restricting $a$ to the range where $a+x\geq 2^k$ but $a+y<2^k$ is essentially the same as adding some prefix value to both $x$ and $y$, and then decreasing the current value of $z$. To illustrate, the two earlier inequalities turn into $2^k-x\leq a<2^k-y$, and both are now required in order to maximize our answer. We can simulate the lower bound by setting $x:=x+\max(0,2^k-x)$ and $y:=y+\max(0,2^k-x)$, and then setting $z:=\min(z-\max(0,2^k-x),\,(2^k-y)-\max(0,2^k-x))$. Then, subtract $2^k$ from $x$ as this bit is now accounted for, and we are back to the same problem, only with lower bits. Note that the $\max(0,2^k-x)$ is necessary, as it is forced that $v\geq 0$.↵
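↵
For example, with $x = 1$, $y = 2$, $z = 3$, the best choice is $v = 2$, giving $(2+1)\oplus(2+2) = 3\oplus 4 = 7$.↵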
↵
↵
For the full details, one can refer to the solution code. The final complexity is $O(\log \max(x,y,z))$ per case.↵
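↵
For small $z$, the greedy can be cross-checked against a direct brute force over $v$; the sketch below assumes the same input format as the reference solution (number of test cases, then $x$, $y$, $z$ per case) and is only meant for stress testing.↵
↵
~~~~~↵
#include <bits/stdc++.h>↵
using namespace std;↵
↵
// Brute force: try every v in [0, z) and keep the maximum of (v+x) xor (v+y).↵
// Only feasible when z is small.↵
int main(){↵
    int t; cin >> t;↵
    while(t--){↵
        long long x, y, z, best = 0;↵
        cin >> x >> y >> z;↵
        for(long long v = 0; v < z; ++v) best = max(best, (v + x) ^ (v + y));↵
        cout << best << "\n";↵
    }↵
}↵
~~~~~↵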
</spoiler>↵
↵
↵
<spoiler summary="Code">↵
~~~~~↵
#include<bits/stdc++.h>↵
using namespace std;↵
#define B 30↵
int x,y,z;↵
void solve(){↵
cin>>x>>y>>z;↵
//(v+x)^(v+y) s.t. 0<=v<z↵
if(x==y){↵
cout<<0<<endl;↵
return;↵
}↵
int cans=0;↵
for(int a=B-1;a>=0;--a){↵
if(x==y)break;↵
if(x>=(1<<a)&&y>=(1<<a)){↵
x-=(1<<a);↵
y-=(1<<a);↵
continue;↵
}↵
if(z-1+x>=(1<<a)||z-1+y>=(1<<a)){↵
if(x<y)swap(x,y);↵
z=min(z-max(0,((1<<a)-x)),((1<<a)-y)-max(0,((1<<a)-x)));↵
int av=max(0,(1<<a)-x);↵
x+=av;↵
y+=av;↵
x-=(1<<a);↵
cans+=(1<<a);↵
}↵
}↵
cout<<cans<<endl;↵
}↵
signed main(){↵
iostream::sync_with_stdio(false);cin.tie(NULL);↵
int t;cin>>t;↵
for(int a=0;a<t;++a){↵
solve();↵
}↵
}↵
~~~~~↵
</spoiler>↵
↵
### [H. Permutator](https://codeforces.me/gym/104520/problem/H)↵
↵
<spoiler summary="Hint 1">↵
Note that every index $i$ contributes independently to the answer.↵
</spoiler>↵
↵
<spoiler summary="Hint 2">↵
In how many subarrays is the index $i$ contained?↵
</spoiler>↵
↵
<spoiler summary="Hint 3">↵
Each index $i$ contributes $a_{i} \cdot b_{i} \cdot sub_{i}$ where $sub_{i}$ is the number of subarrays containing $i$.↵
</spoiler>↵
↵
<spoiler summary="Hint 4">↵
As $a$ is fixed, we know how many times each index $i$ contributes to the answer except for the values in $b$, and it's obvious that it's always better to match the highest contributing values in $a$ with the lowest ones in $b$ to minimize the answer.↵
</spoiler>↵
↵
<spoiler summary="Solution">↵
Please, read all the hints if you haven't.↵
↵
To solve this problem we will create a new array $a'$ where $a'[i] = a[i] \cdot (i+1) \cdot (n-i)$, with $(i+1) \cdot (n-i)$ being the number of subarrays that contain the index $i$ (0-indexed), and we will sort it in ascending order.↵
↵
Then we will sort $b$ in descending order and the answer will be $\sum_{i = 0}^{n-1} a'_{i} \cdot b_{i}$.↵
↵
Formally, this can be proven with the rearrangement inequality.↵
↵
Complexity of the solution: $\mathcal{O}(n \log n)$.↵
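↵
As a quick check, for $n = 3$ the middle index $i = 1$ (0-indexed) lies in $(1+1) \cdot (3-1) = 4$ subarrays: $[0, 1]$, $[0, 2]$, $[1, 1]$ and $[1, 2]$.↵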
↵
</spoiler>↵
↵
<spoiler summary="Code">↵
~~~↵
#include <bits/stdc++.h>↵
using namespace std;↵
↵
#define int long long↵
↵
const int N = 1e5;↵
↵
int n, ans, a[N], b[N];↵
↵
signed main() {↵
ios::sync_with_stdio(false); cin.tie(nullptr);↵
↵
cin >> n;↵
for (int i = 0; i < n; i++) {↵
cin >> a[i];↵
a[i] *= (i+1)*(n-i);↵
}↵
sort(a, a+n);↵
↵
for (int i = 0; i < n; i++) {↵
cin >> b[i];↵
}↵
sort(b, b+n);↵
↵
ans = 0;↵
for (int i = 0; i < n; i++) {↵
ans += a[i]*b[n-i-1];↵
}↵
cout << ans << "\n";↵
}↵
~~~↵
</spoiler>↵
↵
↵
### [I. Counting Palindromic Sequences](https://codeforces.me/gym/104520/problem/I)↵
↵
<spoiler summary="Hint 1">↵
Try to formulate an $\mathcal{O}(n^2)$ DP for the first subtask.↵
</spoiler>↵
↵
↵
<spoiler summary="Hint 2">↵
Optimize the transitions from $\mathcal{O}(n)$ to $\mathcal{O}(1)$.↵
</spoiler>↵
↵
↵
<spoiler summary="Solution">↵
Let $f_i$ denote the number of palindromic sequences with sum $i$ that do not contain any instance of $k$, and let $g_i$ denote the number of palindromic sequences with sum $i$ that contain at least one instance of $k$. Then, we have the recurrences↵
↵
$$↵
\begin{align*}↵
f_i=\sum_{x=1}^{\lfloor i/2\rfloor} f_{i-2x} - f_{i-2k}↵
\end{align*}↵
$$↵
↵
$$↵
\begin{align*}↵
g_i=\sum_{x=1}^{\lfloor i/2\rfloor} g_{i-2x} + f_{i-2k}↵
\end{align*}↵
$$↵
↵
Note that for convenience, we define $f_i=g_i=0$ for $i<0$. In other words, each transition sums $f$ or $g$ over all indices below $i$ that have the same parity as $i$. We can maintain running sums for each parity to avoid recalculating this at every iteration, making the transitions $\mathcal{O}(1)$.↵
↵
**Time Complexity:** $\mathcal{O}(n)$↵
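↵
For reference, here is an unoptimized $\mathcal{O}(n^2)$ sketch of exactly these recurrences (the DP from Hint 1); it assumes the same input format as the reference code ($t$ test cases, then $n$, $k$, $Z$ per case) and is only meant for checking the formulas on small inputs.↵
↵
~~~~~↵
#include <bits/stdc++.h>↵
using namespace std;↵
↵
int main() {↵
    int tc;↵
    cin >> tc;↵
    while (tc--) {↵
        long long n, k, Z;↵
        cin >> n >> k >> Z;↵
        // f[i]: palindromic sequences with sum i and no occurrence of k↵
        // g[i]: palindromic sequences with sum i and at least one k↵
        vector<long long> f(n + 1, 0), g(n + 1, 0);↵
        f[0] = 1 % Z; // the empty sequence↵
        for (long long i = 1; i <= n; i++) {↵
            // the one-element palindrome [i]↵
            if (i == k) g[i] = 1 % Z; else f[i] = 1 % Z;↵
            // peel a pair of equal outer elements x off both ends↵
            for (long long x = 1; 2 * x <= i; x++) {↵
                if (x != k) f[i] = (f[i] + f[i - 2 * x]) % Z; // outer pair must avoid k↵
                g[i] = (g[i] + g[i - 2 * x]) % Z;             // inner part already contains k↵
                if (x == k) g[i] = (g[i] + f[i - 2 * x]) % Z; // k appears only as the outer pair↵
            }↵
        }↵
        cout << g[n] << '\n';↵
    }↵
}↵
~~~~~↵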
</spoiler>↵
↵
↵
<spoiler summary="Code">↵
↵
~~~~~↵
#include <bits/stdc++.h>↵
↵
int Z;↵
↵
class mint {↵
long long x;↵
↵
public:↵
mint(long long _x) : x(_x % Z) {}↵
↵
mint &operator+=(const mint &n) { ↵
x += n.x;↵
if (x >= Z)↵
x -= Z;↵
return *this;↵
}↵
↵
mint &operator-=(const mint &n) {↵
x -= n.x;↵
if (x < 0)↵
x += Z;↵
return *this;↵
}↵
↵
mint &operator++() {↵
x++;↵
if (x == Z)↵
x = 0;↵
return *this;↵
}↵
↵
int val() { return x; }↵
};↵
↵
mint operator+(mint a, const mint &b) { return a += b; }↵
mint operator-(mint a, const mint &b) { return a -= b; }↵
↵
int main() {↵
std::ios_base::sync_with_stdio(false);↵
std::cin.tie(NULL);↵
↵
int tc;↵
std::cin >> tc;↵
while (tc--) {↵
int n, k;↵
std::cin >> n >> k >> Z;↵
std::vector<mint> f(n+1, 0), g(n+1, 0);↵
if (k == 1) ↵
f[0] = g[1] = 1;↵
else ↵
f[0] = f[1] = 1;↵
mint fs0 = f[0], fs1 = f[1];↵
mint gs0 = g[0], gs1 = g[1];↵
for (int i = 2; i <= n; i++) {↵
if (i == k)↵
++g[i];↵
else↵
++f[i];↵
f[i] += fs0 - (i < 2 * k ? 0 : f[i - 2 * k]);↵
g[i] += gs0 + (i < 2 * k ? 0 : f[i - 2 * k]);↵
fs0 += f[i], gs0 += g[i];↵
std::swap(fs0, fs1);↵
std::swap(gs0, gs1);↵
}↵
↵
std::cout << g[n].val() << '\n';↵
}↵
}↵
~~~~~↵
↵
↵
</spoiler>↵
↵
### [J. TeamsCode Meetings](https://codeforces.me/gym/104520/problem/J)↵
↵
<spoiler summary="Hint 1">↵
Try to construct a graph that represents this problem, how would edges work? What is a common property of all meetings that Bossologist must attend?↵
</spoiler>↵
↵
<spoiler summary="Hint 2">↵
First come up with a solution for just one day of the week, to make it easier pretend the problem just asked you about day $1$.↵
</spoiler>↵
↵
<spoiler summary="Hint 3">↵
Think about why it matters which day of the week the idea gets created, and how exactly it affects which meetings are productive.↵
</spoiler>↵
↵
<spoiler summary="Solution">↵
The structure of this problem strongly suggests that the solution has something to do with a graph, so let's go ahead and try using one. We let the days of the week be nodes, and we say an edge exists between two nodes if there is a setter that goes to both days' meetings. ↵
↵
From here it might be tempting to presume that the answer is just the number of connected components in the graph, but it's important to realize that the edges are actually _directed_ rather than undirected. This is because the nodes represent points in time rather than space, and since time can only flow in one direction a problem setter can't retroactively make a previous meeting productive. The nuance here is that there are still two directed edges between any two days $a$ and $b$ which a problem setter attends. This may seem contradictory to the previous statement, but it's actually because there's one directed edge between $a$ and $b$, and another between day $b$ and day $a$ of the **following week.** So it might be more useful to visualize a graph with $2N$ nodes labeled $1, 2,\ldots, 2N$, the second $N$ nodes representing days in the week after an idea is created.↵
↵
For now let's just solve for when the problem idea is created on day $1$. Note that since we only care about meetings within $N$ days of the idea's conception, we just need to worry about edges starting and ending within the first $N$ nodes. Using this graph representation, it becomes clear that Bossologist must go to all meetings with in-degrees of $0$. ↵
↵
Now the question is how to efficiently compute the answer for the rest of the days of the week. Note that we always only care about edges that start or end within $N$ days of the idea's conception, so the only difference between an idea being created on day $i + 1$ rather than day $i$ is at the two ends of this window. More specifically, we delete the edges coming out of node $i$ and add in the edges going into node $N + i$. Using this method we can easily maintain the number of meetings with an in-degree of $0$, and thus we have our solution.↵
↵
If you let $P$ be the sum of all $p_i$, the time complexity is $\mathcal{O}(N + P)$↵
</spoiler>↵
↵
↵
<spoiler summary="Code">↵
~~~~~↵
#include <iostream>↵
#include <map>↵
#include <vector>↵
#include <memory>↵
using namespace std;↵
#define MAXM 100001↵
↵
int main() {↵
std::ios_base::sync_with_stdio(false);↵
std::cin.tie(NULL);↵
↵
int N, M;↵
cin >> N >> M;↵
↵
//for every start point have a list of all end points↵
vector<int> ep[MAXM];↵
vector<int> deg(N, -1);↵
↵
for (int i = 0; i < M; i++)↵
{↵
int p;↵
cin >> p;↵
↵
vector<int> d(p);↵
for (int j = 0; j < p; j++)↵
{↵
cin >> d[j];↵
d[j]--; ↵
if (deg[d[j]] == -1) deg[d[j]] = 0;↵
↵
if (j) ↵
{↵
ep[d[j - 1]].push_back(d[j]);↵
deg[d[j]]++;↵
}↵
}↵
if (p > 1)↵
{↵
ep[d[p - 1]].push_back(d[0]);↵
deg[d[0]]++;↵
}↵
}↵
↵
int ans = 0;↵
↵
//get init answer↵
for (int i = 0; i < N; i++)↵
if (deg[i] == 0) ans++;↵
↵
vector<int> deg_temp = deg;↵
↵
//get answer for 0↵
for (int i = 0; i < N; i++)↵
{↵
for (int j : ep[i])↵
{↵
if (j < i) deg_temp[j]--;↵
if (deg_temp[j] == 0) ans++;↵
}↵
}↵
↵
cout << ans << "\n";↵
↵
for (int i = 1; i < N; i++)↵
{↵
if (deg_temp[i - 1] == 0 && deg[i - 1] != 0) ans--;↵
deg_temp[i - 1] = deg[i - 1];↵
↵
for (int j : ep[i - 1])↵
{↵
deg_temp[j]--;↵
if (deg_temp[j] == 0) ans++;↵
}↵
↵
cout << ans << "\n";↵
}↵
return 0;↵
}↵
~~~~~↵
↵
↵
</spoiler>↵
↵
↵
### [K. Med and Mex](https://codeforces.me/gym/104520/problem/K)↵
↵
<spoiler summary="Hint 1">↵
When can a good subarray have a value of $x$?↵
↵
</spoiler>↵
↵
↵
<spoiler summary="Hint 2">↵
A good subarray will have value $x$ iff it has length $2(x-1)$ and contains $1 \dots x-1$ and $x-1$ elements from $x+1 \dots n$.↵
↵
</spoiler>↵
↵
<spoiler summary="Hint 3">↵
For each value $x$, count the number of permutations that have it as value for a good subarray. Fix the elements in the subarray, and fix the elements outside of the subarray.↵
</spoiler>↵
↵
↵
<spoiler summary="Solution">↵
↵
If $x = 1$ or $x > \frac{n+1}{2}$, there are $0$ good subarrays with value $x$.↵
↵
Otherwise, besides $1 \dots x-1$ the subarray must contain $x + 1$, and the remaining $x - 2$ elements greater than $x$ are chosen from $\{x + 2, \dots, n\}$. So let $a = (2x - 2)! \cdot {{n - x - 1}\choose{x - 2}}$ and $b = (n - (2x - 2) + 1)!$. The answer for $x$ is $a \cdot b$.↵
↵
Final complexity is $O(n)$ or $O(n \log n)$ depending on how you compute binomial coefficients.↵
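↵
As a quick check, for $n = 3$ and $x = 2$ this gives $a = 2! \cdot {0 \choose 0} = 2$ and $b = 2! = 2$, so $a \cdot b = 4$, matching the four permutations $132$, $213$, $231$ and $312$ in which $1$ and $3$ are adjacent.↵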
↵
↵
</spoiler>↵
↵
<spoiler summary="Code">↵
~~~~~↵
//written by omeganot↵
#include <bits/stdc++.h>↵
↵
using namespace std;↵
↵
using ll = long long;↵
const int MOD = 1E9 + 7;↵
const int INF = 1E9; const ll INFLL = 1E18;↵
↵
const int MAX = 1E5;↵
↵
int fact[MAX + 1]; ↵
int ifact[MAX + 1];↵
↵
int binPow(int base, int exp) {↵
int ans = 1;↵
while(exp) {↵
if(exp % 2) {↵
ans = (1LL * ans * base) % MOD;↵
}↵
base = (1LL * base * base) % MOD;↵
exp /= 2;↵
}↵
return ans;↵
}↵
↵
int nCk(int n, int k) {↵
if(n < k) {↵
return 0;↵
}↵
return (((1LL * fact[n] * ifact[k]) % MOD) * ifact[n - k]) % MOD;↵
}↵
↵
int n;↵
↵
int main() {↵
ios_base::sync_with_stdio(false);↵
cin.tie(0);↵
fact[0] = 1;↵
for(int i = 1; i <= MAX; i++) {↵
fact[i] = (1LL * fact[i - 1] * i) % MOD;↵
}↵
ifact[MAX] = binPow(fact[MAX], MOD - 2);↵
for(int i = MAX - 1; i >= 0; i--) {↵
ifact[i] = (1LL * ifact[i + 1] * (i + 1)) % MOD;↵
}↵
cin >> n;↵
vector<int> ans(n);↵
for(int i = 1; n - (i + 1) >= i; i++) {↵
int ways = (1LL * nCk(n - (i + 2), i - 1) * fact[2 * i]) % MOD;↵
ans[i] = (1LL * ways * fact[n - 2 * i + 1]) % MOD;↵
}↵
for(int i : ans) {↵
cout << i << " ";↵
}↵
cout << "\n";↵
}↵
~~~~~↵
</spoiler>↵
↵
↵
### [L. Easy Tree Problem](https://codeforces.me/gym/104520/problem/L)↵
↵
<spoiler summary="Solution">↵
There are a few initial observations: we never paint an ancestor of a node after painting the node, as that would make painting the node itself a wasted operation. This suggests a DFS from the root downwards, choosing whether or not to paint the current node its intended final color.↵
↵
Many people did this problem incorrectly during the contest using a greedy, where you only paint nodes that are not the correct color. It must be noted that it is sometimes optimal to paint a node that is already in the correct color, as it paints the entire subtree as well, which might be cheaper to do at the current node than at the nodes in its subtree.↵
↵
This can be done with a DP instead. For each node, there are several states it can be in due to painting operations on its ancestors: it has already been painted its correct final color, it has never been painted by an ancestor, or it has been painted a color that is not its correct final color. The DP transitions are not difficult and can be found by simply testing whether you paint the node its final color or not (there is no reason to ever paint a node a color that is not its intended final color). Note that we do not care which exact color a node has been painted if that color is not the correct final one: we will have to paint the correct color over it anyway, so the specific wrong color doesn't matter.↵
↵
The transitions are all tree edges and the DP states are all tree nodes, so the complexity is $O(n)$ per case.↵
</spoiler>↵
↵
<spoiler summary="Code">↵
~~~~~↵
#include<bits/stdc++.h>↵
using namespace std;↵
#define int long long↵
#define inf 0x3f3f3f3f3f3f3f3f↵
#define N 200100↵
int n;↵
int icol[N],fcol[N];↵
int par[N];↵
vector<int>adj[N];↵
int cost[N];↵
int dp[N][3];//ancestor never set, ancestor set to same as current, ancestor set to rando↵
void dfs(int u){↵
for(auto a:adj[u]){↵
dfs(a);↵
}↵
dp[u][0]=dp[u][1]=dp[u][2]=inf;↵
int ca=cost[u];↵
for(auto a:adj[u]){ca+=dp[a][(fcol[u]==fcol[a])?1:2];}↵
dp[u][0]=min(dp[u][0],ca);↵
if(fcol[u]==icol[u]){↵
ca=0;↵
for(auto a:adj[u]){ca+=dp[a][0];}↵
dp[u][0]=min(dp[u][0],ca);↵
}↵
ca=0;↵
for(auto a:adj[u]){ca+=dp[a][(fcol[u]==fcol[a])?1:2];}↵
dp[u][1]=min(dp[u][1],ca);↵
ca=cost[u];↵
for(auto a:adj[u]){ca+=dp[a][(fcol[u]==fcol[a])?1:2];}↵
dp[u][2]=min(dp[u][2],ca);↵
}↵
void solve(){↵
for(int a=0;a<n;++a)adj[a].clear();↵
cin>>n;↵
for(int a=1;a<n;++a){↵
cin>>par[a];↵
adj[--par[a]].push_back(a);↵
}↵
for(int a=0;a<n;++a)cin>>cost[a];↵
for(int a=0;a<n;++a)cin>>icol[a];↵
for(int a=0;a<n;++a)cin>>fcol[a];↵
dfs(0);↵
cout<<dp[0][0]<<endl;↵
}↵
signed main(){↵
iostream::sync_with_stdio(false);cin.tie(NULL);↵
int t;cin>>t;↵
for(int a=0;a<t;++a){↵
solve();↵
}↵
}↵
~~~~~↵
</spoiler>↵
↵
↵
### [M. Gift Wrapping](https://codeforces.me/gym/104520/problem/M)↵
↵
<spoiler summary="Hint 1">↵
The area asked for in the problem can be found by repeatedly merging overlapping gifts into the smallest rectangle that covers both of them. ↵
</spoiler>↵
↵
<spoiler summary="Hint 2">↵
Let's say we have a set of non-intersecting gifts. If we want to insert a new gift, then we can just find any overlapping gift, merge the two gifts into a bigger gift, then repeat the process with the bigger gift. This will remove all intersections.↵
</spoiler>↵
↵
<spoiler summary="Hint 3">↵
Since the gifts are added in increasing order of right edge, to find any intersecting gift with a new gift, we can query for the gift with the largest right edge such that it overlaps with the new gift's right edge on the y-axis and also is greater than the new gift's left edge.↵
</spoiler>↵
↵
<spoiler summary="Solution">↵
Our problem reduces to supporting a data structure that can perform range max queries, range set updates, and undoing a previous update. We can do this using a segtree, but instead of propagating updates, we will consider all updates in the subtree of a node and on the path from the node to the root and take the most recent of those. Since the set updates are also guaranteed to be in increasing order, we do not even have to store the time. We can maintain a stack at each node in the segment tree, as well as which updates have been rolled back. Whenever we perform an update, we push onto the stack of every node that is touched by the update. Whenever we roll back an update, we repeatedly pop from the stack of every node that was touched by the update while the top of the stack has been rolled back. This guarantees that each node is always up to date. The final complexity is $\mathcal{O}(n \log n)$ in both time and memory.↵
</spoiler>↵
↵
<spoiler summary="Code">↵
```c++↵
#include <bits/stdc++.h>↵
using namespace std;↵
↵
#define pb push_back↵
#define ff first↵
#define ss second↵
↵
typedef long long ll;↵
typedef long double ld;↵
typedef pair<int, int> pii;↵
typedef pair<ll, ll> pll;↵
↵
const int INF = 1e9;↵
const ll LLINF = 1e18;↵
const int MOD = 1e9 + 7;↵
↵
void setIO() {↵
ios_base::sync_with_stdio(0); cin.tie(0);↵
}↵
↵
const int N = 100005;↵
const int M = 100000;↵
↵
stack<int> seg[4*N];↵
int mx[4*N];↵
bool in[N];↵
↵
int val(int x){↵
return (seg[x].size() ? seg[x].top() : -1);↵
}↵
↵
void build(int l = 0, int r = M - 1, int cur = 0){↵
mx[cur] = -1;↵
while(!seg[cur].empty()) seg[cur].pop();↵
if(l == r) return;↵
int mid = (l + r)/2;↵
build(l, mid, cur*2 + 1);↵
build(mid + 1, r, cur*2 + 2);↵
}↵
↵
//1 - insert, 0 - remove↵
void update(int l, int r, int v, int t, int ul = 0, int ur = M - 1, int cur = 0) {↵
if(l <= ul && ur <= r){↵
if(t) seg[cur].push(v);↵
else while(seg[cur].size() && !in[seg[cur].top()]) seg[cur].pop();↵
mx[cur] = val(cur);↵
if(ul != ur) mx[cur] = max({mx[cur], mx[cur*2 + 1], mx[cur*2 + 2]});↵
return;↵
}↵
int mid = (ul + ur)/2;↵
if(l <= mid) update(l, r, v, t, ul, mid, cur*2 + 1);↵
if(r > mid) update(l, r, v, t, mid + 1, ur, cur*2 + 2);↵
mx[cur] = max({val(cur), mx[cur*2 + 1], mx[cur*2 + 2]});↵
}↵
↵
int query(int l, int r, int ul = 0, int ur = M - 1, int cur = 0){↵
if(l <= ul && ur <= r) return mx[cur];↵
if(ur < l || ul > r) return -1;↵
int mid = (ul + ur)/2;↵
return max({query(l, r, ul, mid, cur*2 + 1), query(l, r, mid + 1, ur, cur*2 + 2), val(cur)});↵
}↵
↵
pair<pii, pii> merge(pair<pii, pii> a, pair<pii, pii> b){↵
return {{min(a.ff.ff, b.ff.ff), min(a.ff.ss, b.ff.ss)}, {max(a.ss.ff, b.ss.ff), max(a.ss.ss, b.ss.ss)}};↵
}↵
↵
ll area(pair<pii, pii> x){↵
return (ll)(x.ss.ff - x.ff.ff + 1)*(x.ss.ss - x.ff.ss + 1);↵
}↵
↵
int main(){↵
setIO();↵
int t;↵
cin >> t;↵
for(int tt = 1; tt <= t; tt++){↵
build();↵
int n;↵
cin >> n;↵
ll sum = 0;↵
pair<pii, pii> arr[n];↵
for(int i = 0; i < n; i++){↵
cin >> arr[i].ff.ff >> arr[i].ff.ss >> arr[i].ss.ff >> arr[i].ss.ss;↵
arr[i].ss.ff--, arr[i].ss.ss--;↵
in[i] = true;↵
int nxt = query(arr[i].ff.ss, arr[i].ss.ss);↵
while(nxt != -1 && arr[nxt].ss.ff >= arr[i].ff.ff){↵
in[nxt] = false;↵
update(arr[nxt].ff.ss, arr[nxt].ss.ss, nxt, 0);↵
sum -= area(arr[nxt]);↵
arr[i] = merge(arr[i], arr[nxt]);↵
nxt = query(arr[i].ff.ss, arr[i].ss.ss);↵
}↵
update(arr[i].ff.ss, arr[i].ss.ss, i, 1);↵
sum += area(arr[i]);↵
cout << sum << endl;↵
}↵
}↵
}↵
```↵
</spoiler>↵
↵
↵
### [N. Save the Timeline!](https://codeforces.me/gym/104520/problem/N)↵
↵
<spoiler summary="Hint 1">↵
Consider solving the problem without probabilities or queries. What active nodes matter and which don't? Consider the contribution of each node.↵
</spoiler>↵
↵
↵
<spoiler summary="Hint 2">↵
Linearity of Expectation makes dealing with probabilities much easier.↵
</spoiler>↵
↵
↵
<spoiler summary="Hint 3">↵
Consider how the contribution of each node in the overall structure of the tree changes in each query.↵
</spoiler>↵
↵
↵
<spoiler summary="Solution">↵
First consider the question without the probabilities. Consider the "contribution" of each node to the answer. If a node in the subtree of the current node is active, then this node is essentially not active, since we will walk through it anyway when we walk to the node in its subtree. For every node that is truly active, we must walk from the root to that node and then to the shallowest leaf in its subtree; this way, the "contribution" of each node, if the node is truly active, can be calculated.↵
↵
Now consider with probabilities, but without the queries. Due to linearity of expectation, it suffices to find the sum of the product of the contribution of each node and the probability that it is active.↵
↵
Now consider the queries. Look at the new contribution of each node: the only nodes whose contributions change are the nodes with query nodes in their subtree. To be more accurate, their contribution is completely removed, and the contributions at each query node must be recalculated. Recalculating the contribution at each query node can be easily achieved using prior precomputation of the probability that none of the nodes in its subtree are active. To remove the contributions of ancestors of the query nodes, compute prefix sums of the contributions along each node's ancestors. Note that by subtracting these, we will double subtract some nodes that are ancestors of several query nodes. To counteract this, sort the query nodes in DFS order and add back the prefix sum of the LCA of adjacent query nodes in DFS order. It is not hard to see that this removes all double (or more) counting. If a query node is the ancestor of another query node, we can just ignore the former. All of this can be maintained with precomputation. Due to sorting the nodes in the queries, the final complexity is $O(\sum K \log K+N\log N)$.↵
↵
There is one edge case where the only active node is $0$, which can be fixed easily if known. For more details, refer to the solution code.↵
</spoiler>↵
↵
<spoiler summary="Code">↵
~~~~~↵
#include<bits/stdc++.h>↵
using namespace std;↵
#define ll long long↵
#define N 1001000↵
#define J 21↵
#define inf 0x3f3f3f3f↵
#define MOD 998244353↵
int lg2(int x) {↵
assert(x>=0);↵
if(!x){return 0;}↵
return 32 - __builtin_clz(x) - 1;↵
}↵
int n,q;↵
int arr[N];↵
vector<int>adj[N];↵
int mc[N],cost[N];//↵
int tin[N],tout[N],t;//↵
int sprod[N],def[N];↵
int contribution[N],prefix[N];↵
int rin[N],par[N];//↵
int dep[N];//↵
int fst[N];//↵
int ans=0;↵
vector<int>qq;↵
int rd(int x){return (x>=MOD)?x-MOD:x;}↵
void red(int&x){x=((x>=MOD)?x-MOD:x);}↵
namespace lca{↵
vector<int>tour;↵
int sp[2*N][J];↵
void init(){↵
for(int a=0;a<tour.size();++a)sp[a][0]=tour[a];↵
for(int j=1;j<J;++j){↵
for(int a=0;a+(1<<(j-1))<tour.size();++a){↵
sp[a][j]=((dep[sp[a][j-1]]<dep[sp[a+(1<<(j-1))][j-1]])?sp[a][j-1]:sp[a+(1<<(j-1))][j-1]);↵
}↵
for(int a=max(0,(int)tour.size()-(1<<(j-1)));a<tour.size();++a){↵
sp[a][j]=sp[a][j-1];↵
}↵
}↵
}↵
int query(int u,int v){↵
u=fst[u],v=fst[v];↵
if(u>v)swap(u,v);↵
int l=lg2(v-u+1);↵
return (dep[sp[u][l]]<dep[sp[v-(1<<l)+1][l]])?sp[u][l]:sp[v-(1<<l)+1][l];↵
}↵
}↵
void dfs(int u,int p=-1,int d=0){↵
dep[u]=d;↵
lca::tour.push_back(u);↵
fst[u]=lca::tour.size()-1;↵
tin[u]=++t;rin[tin[u]]=u;par[u]=p;↵
mc[u]=(((int)adj[u].size()-(p!=-1)==0)?0:inf);↵
def[u]=1;↵
for(auto a:adj[u]){↵
if(a==p)continue;↵
dfs(a,u,d+1);↵
mc[u]=min(mc[u],mc[a]+1);↵
def[u]=((ll)def[u]*sprod[a])%MOD;↵
lca::tour.push_back(u);↵
}↵
sprod[u]=((ll)def[u]*rd(1-arr[u]+MOD))%MOD;↵
cost[u]=mc[u]+d;↵
contribution[u]=(ll)cost[u]*def[u]%MOD*arr[u]%MOD;↵
if(u==0)cost[u]=contribution[u]=0;↵
ans=rd(ans+contribution[u]);↵
tout[u]=t;↵
}↵
bool isanc(int u,int v){↵
return tin[u]<=tin[v]&&tout[v]<=tout[u];↵
}↵
void solve(){↵
ans=0;↵
for(int a=0;a<n;++a)adj[a].clear();↵
lca::tour.clear();↵
t=0;↵
cin>>n>>q;↵
lca::tour.reserve(2*n);↵
for(int a=0;a<n;++a){↵
cin>>arr[a];↵
}↵
for(int a=0;a<n-1;++a){↵
int u,v;cin>>u>>v;--u,--v;↵
adj[u].push_back(v);↵
adj[v].push_back(u);↵
}↵
dfs(0);↵
for(int i=1;i<=n;++i){const int&a=rin[i];↵
prefix[a]=rd((a?prefix[par[a]]:0)+contribution[a]);↵
}↵
lca::init();↵
for(int a=0;a<q;++a){↵
int k;cin>>k;qq.resize(k,0);↵
for(int b=0;b<k;++b){↵
cin>>qq[b];--qq[b];↵
}↵
sort(qq.begin(),qq.end(),[&](const int&x,const int&y)->bool{return tin[x]<tin[y];});↵
int pp=-1;↵
int cans=ans;↵
for(int i=0;i<qq.size();++i){↵
if(i<qq.size()-1&&isanc(qq[i],qq[i+1]))continue;↵
cans=((ll)cans+MOD-prefix[qq[i]]+(ll)def[qq[i]]*cost[qq[i]]%MOD)%MOD;↵
if(~pp){↵
cans=rd(cans+prefix[lca::query(qq[i],pp)]);↵
}↵
pp=qq[i];↵
}↵
cout<<cans<<"\n";↵
}↵
}↵
↵
signed main(){↵
iostream::sync_with_stdio(false);cin.tie(NULL);↵
int t;cin>>t;↵
for(int a=0;a<t;++a){↵
solve();↵
}↵
}↵
~~~~~↵
</spoiler>↵
↵
↵
### [O. Average Range Query Problem](https://codeforces.me/gym/104520/problem/O)↵
↵
↵
<spoiler summary="Solution">↵
In the original problem, solutions using calculus were able to get past the time limit because we didn't set the constraints high enough. The constraints in the gym are now $t \le 10$ and $n \le 200$.↵
↵
Let's think of a related problem with $n$ ranges of $[0, 1]$. We can expect the min to be $\frac{1}{n+1}$ and the max to be at $\frac{n}{n+1}$. One way to show this is that after choosing $n$ random points on the line segment going from $0$ to $1$, the average space between any two consecutive points, including the points at $0$ and $1$, is $\frac{1}{n+1}$. Therefore the expected value of the lowest point of $n$ equal ranges $[a, b]$ is $\frac{1}{n+1} \cdot (b-a) + a$.↵
↵
Let's calculate $\mathcal{E}(\max)$ and $\mathcal{E}(\min)$, the expected values of the maximum and the minimum. Then, we can calculate the expected range by subtracting them. We demonstrate how to calculate $\mathcal{E}(\min)$. We sort the intervals in increasing order of $l_i$.↵
↵
Consider each interval $[l_i, \min(R_i, l_{i+1})]$, where $R_i$ is the smallest right endpoint seen so far (once $l_i \geq R_i$, the minimum can no longer lie that far to the right, so we stop).↵
↵
By construction, each problem tester's range either fully contains this interval or does not intersect its interior. Using this, we can calculate the probability that the minimum lies in this interval.↵
↵
For one such interval $[a, b]$, let $dp_x$ be the probability that exactly $x$ problem testers chose a difficulty inside the interval and all the others chose a difficulty greater than it. To calculate the expected value of the minimum, we sum, over the intervals and over $x$, the product of $dp_x$ and the expected value of the minimum when $x$ random values are chosen uniformly in the same range, namely $a + \frac{1}{x+1} \cdot (b-a)$.↵
↵
Complexity of the solution: $\mathcal{O}(TN^{3})$↵
</spoiler>↵
↵
↵
<spoiler summary="Code (codicon)">↵
~~~~~↵
// O(T*N^3)↵
↵
#include <bits/stdc++.h>↵
↵
using namespace std;↵
↵
int main() {↵
cin.tie(0)->sync_with_stdio(0);↵
↵
int n, t; cin >> n >> t;↵
↵
while (t--) {↵
vector<pair<double, double>> I;↵
↵
for (int _ = 0; _ < n; _++) {↵
double l, r; cin >> l >> r;↵
I.emplace_back(l, r);↵
}↵
↵
vector<double> E;↵
↵
for (int _ = 0; _ < 2; _++) {↵
sort(I.begin(), I.end());↵
E.push_back(0);↵
↵
double R = 3501;↵
for (int i = 0; i < n; i++) {↵
auto [l, r] = I[i];↵
↵
if (l >= R) {↵
break;↵
}↵
↵
R = min(R, r);↵
↵
if (l == r) {↵
double prob_after = 1;↵
for (auto [l2, r2] : I) {↵
if (l > l2) {↵
prob_after *= (r2 - max(l2, l)) / (r2 - l2);↵
}↵
}↵
↵
E.back() -= prob_after * l;↵
}↵
else {↵
vector<double> prob = {1};↵
double a = l, b = min(i < n-1 ? I[i + 1].first : 3500, R);↵
↵
for (auto [l2, r2]: I) {↵
double prob_in_ab, prob_after_ab;↵
↵
if (b <= l2) {↵
prob_in_ab = 0;↵
prob_after_ab = 1;↵
} else {↵
prob_in_ab = (b - a) / (r2 - l2);↵
prob_after_ab = (r2 - b) / (r2 - l2);↵
}↵
↵
prob.push_back(0);↵
for (int k = size(prob) - 1; k >= 0; k--) {↵
prob[k] *= prob_after_ab;↵
if (k) {↵
prob[k] += prob_in_ab * prob[k - 1];↵
}↵
}↵
}↵
↵
for (int k = 1; k < size(prob); k++) {↵
E.back() -= prob[k] * (a + (b - a) / (k + 1));↵
}↵
}↵
}↵
↵
for (int i = 0; i < n; i++) {↵
I[i] = {-I[i].second, -I[i].first};↵
}↵
}↵
↵
cout << fixed << setprecision(4) << E[0] + E[1] << '\n';↵
}↵
}↵
↵
~~~~~↵
↵
↵
</spoiler>↵
↵
### [P. Omer and Intervals](https://codeforces.me/gym/104520/problem/P)↵
↵
<spoiler summary="Hint 1">↵
Interval endpoint values don't matter, only their relative order does, so we can do coordinate compression.↵
</spoiler>↵
↵
<spoiler summary="Hint 2">↵
Each group will have at least one point $x$ which will be contained in every interval of that group.↵
</spoiler>↵
↵
<spoiler summary="Hint 3">↵
We can easily check each pair of points $x$ (group $A$) and $y$ (group $B$) in $\mathcal{O}(n^{3})$ total, counting the intervals that contain $x$, $y$, or both, and distributing them optimally.↵
</spoiler>↵
↵
<spoiler summary="Hint 4">↵
It's possible to optimize the previous solution to $\mathcal{O}(n^{2})$ with prefix sums and frequency arrays.↵
</spoiler>↵
↵
<spoiler summary="Hint 5">↵
Can we do better? What if we only check every possible $x$? Binary search on the answer?↵
</spoiler>↵
↵
<spoiler summary="Hint 6">↵
Can we do it better? Sweepline? Lazy Segment Tree?↵
</spoiler>↵
↵
<spoiler summary="Hint 7">↵
If we want to add intervals to group $A$, with a fixed intersection point $x$ and a fixed size $k$, it will be optimal to take the smallest $k$ intervals by right endpoint that contain $x$.↵
</spoiler>↵
↵
<spoiler summary="Hint 8">↵
With binary search on the answer $k$, a lazy segment tree supporting range maximum queries and range addition updates, and a sweepline maintaining the taken and still-available intervals (taking, for each index $i$, the $k$ intervals containing $i$ with the smallest right endpoints), we can solve this problem in $\mathcal{O}(n \log^2 n)$.↵
</spoiler>↵
↵
<spoiler summary="Hint 9">↵
Can we do better? Can we remove the binary search on the answer? Greedy?↵
</spoiler>↵
↵
<spoiler summary="Hint 10">↵
To achieve an $\mathcal{O}(n \log n)$ complexity and solve the problem, we can remove the binary search and just try to make both groups as close in size as possible at each step. This works because the size of group $A$ decreases faster than that of group $B$: the intervals with the smallest right endpoints are in $A$, and in the sweepline we are moving $x$ to the right.↵
</spoiler>↵
↵
<spoiler summary="Solution">↵
Please, read all the hints if you haven't.↵
↵
To solve this problem, we will do a sweep line, checking every possible intersection point $x$ of group $A$, where $x \leq y$ and $y$ is the intersection point of group $B$.↵
↵
We will maintain the intervals taken into group $A$, the intervals still available to take, and a lazy segment tree storing how many non-taken intervals contain each point.↵
↵
While the size of group $A$ is less than the maximum on the range $[x, 2 \cdot n]$, we will add intervals to group $A$ in ascending order of right endpoint.↵
↵
Note that, after coordinate compression, each interval endpoint value will be $\leq 2 \cdot n$.↵
↵
While doing the sweep line we will have to support operations such as adding new intervals and removing others. We can do this with multisets and priority queues.↵
↵
The answer will be the maximum size of group $A$ at any point.↵
↵
Complexity of the solution: $\mathcal{O}(n \log n)$.↵
↵
</spoiler>↵
↵
<spoiler summary="Code">↵
~~~↵
#include <bits/stdc++.h>↵
using namespace std;↵
↵
const int N = 3e5;↵
↵
int t, n, mx, vt[N*2], f[N*2 + 1], c[N*2 + 1], seg[N*8], laz[N*8];↵
pair<int, int> v[N];↵
vector<int> lp[N*2 + 1], mp;↵
↵
bool cmp(const pair<int, int>& a, const pair<int, int>& b) {↵
return a.second < b.second;↵
}↵
↵
void build(int p=1, int l=0, int r=mx) {↵
if (l == r) {↵
seg[p] = c[l];↵
laz[p] = 0;↵
return;↵
}↵
↵
int mid = (l+r)>>1;↵
build(p<<1, l, mid);↵
build((p<<1) + 1, mid+1, r);↵
↵
seg[p] = max(seg[p<<1], seg[(p<<1) + 1]);↵
laz[p] = 0; ↵
}↵
↵
void pushDown(int p) {↵
int h = p<<1;↵
seg[h] += laz[p];↵
laz[h] += laz[p];↵
↵
h++;↵
seg[h] += laz[p];↵
laz[h] += laz[p];↵
↵
laz[p] = 0;↵
}↵
↵
void update(int x, int a, int b, int p=1, int l=0, int r=mx) {↵
if (r < a || b < l) return;↵
if (a <= l && r <= b) {↵
seg[p] += x;↵
laz[p] += x;↵
return;↵
}↵
if (laz[p]) pushDown(p);↵
↵
int mid = (l+r)>>1;↵
update(x, a, b, p<<1, l, mid);↵
update(x, a, b, (p<<1) + 1, mid+1, r);↵
↵
seg[p] = max(seg[p<<1], seg[(p<<1) + 1]);↵
}↵
↵
↵
int query(int a, int b, int p=1, int l=0, int r=mx) {↵
if (a <= l && r <= b) {↵
return seg[p];↵
}↵
if (laz[p]) pushDown(p);↵
↵
int mid = (l+r)>>1;↵
if (b <= mid) {↵
return query(a, b, p<<1, l, mid);↵
}↵
else if (a >= mid+1){↵
return query(a, b, (p<<1) + 1, mid+1, r);↵
}↵
else {↵
return max(query(a, b, p<<1, l, mid), query(a, b, (p<<1) + 1, mid+1, r));↵
}↵
}↵
↵
int solve() {↵
build();↵
priority_queue<int> toTake;↵
multiset<int> taken;↵
int cnt = 0;↵
for (int& i : lp[0]) {↵
if (cnt < query(0, mx)) {↵
cnt++;↵
taken.insert(-v[i].second);↵
update(-1, 0, v[i].second);↵
}↵
else {↵
toTake.push(-v[i].second);↵
}↵
}↵
↵
int ans = 0;↵
for (int i = 0; i < mx; i++) {↵
ans = max(ans, min(cnt, query(i, mx)));↵
↵
for (int& j : lp[i+1]) {↵
toTake.push(-v[j].second);↵
}↵
↵
while (!toTake.empty() && -toTake.top() <= i) toTake.pop();↵
if (taken.find(-i) != taken.end()) {↵
taken.erase(-i);↵
cnt = taken.size();↵
}↵
↵
while (!toTake.empty() && !taken.empty()) {↵
int y = -toTake.top();↵
↵
auto it2 = taken.begin();↵
int y2 = -(*it2);↵
↵
if (y2 <= y) break;↵
↵
update(1, y+1, y2);↵
↵
taken.erase(it2);↵
taken.insert(-y);↵
↵
toTake.pop();↵
toTake.push(-y2);↵
}↵
↵
while (cnt < query(i+1, mx) && !toTake.empty()) {↵
int y = -toTake.top();↵
↵
update(-1, i+1, y);↵
cnt++;↵
↵
taken.insert(-y);↵
toTake.pop();↵
}↵
}↵
ans = max(ans, min(cnt, query(mx, mx)));↵
↵
return ans;↵
}↵
↵
signed main() {↵
ios::sync_with_stdio(false); cin.tie(nullptr);↵
↵
cin >> t;↵
while (t--) {↵
cin >> n;↵
↵
mp.clear();↵
for (int i = 0; i <= (n<<1); i++) {↵
f[i] = c[i] = 0;↵
lp[i].clear();↵
}↵
↵
for (int i = 0; i < n; i++) {↵
int x, y;↵
cin >> x >> y;↵
v[i] = make_pair(x, y);↵
vt[i<<1] = x;↵
vt[(i<<1)+1] = y;↵
}↵
sort(v, v+n, cmp);↵
sort(vt, vt + (n<<1));↵
↵
mx = -1;↵
for (int i = 0; i < (n<<1); i++) {↵
if (i && vt[i] == vt[i-1]) continue;↵
mp.push_back(vt[i]);↵
mx++;↵
}↵
↵
for (int i = 0; i < n; i++) {↵
v[i].first = lower_bound(mp.begin(), mp.end(), v[i].first) - mp.begin();↵
v[i].second = lower_bound(mp.begin(), mp.end(), v[i].second) - mp.begin();↵
↵
f[v[i].first]++;↵
f[v[i].second+1]--;↵
↵
lp[v[i].first].push_back(i);↵
}↵
↵
for (int i = 0; i <= mx; i++) {↵
c[i] = (i ? c[i-1] : 0);↵
c[i] += f[i];↵
}↵
↵
cout << solve() << "\n";↵
}↵
}↵
~~~↵
</spoiler>↵
↵
↵
### [Q. Beautiful Matrix Counting](https://codeforces.me/gym/104520/problem/Q)↵
↵
<spoiler summary="Hint 1">↵
Each column has either one, two, or zero ones.↵
</spoiler>↵
↵
↵
<spoiler summary="Hint 2">↵
Column $i-k$ has the same number of ones as column $i$ ($k<i\leq n$). This property allows us to only consider the first $k$ positions, with some caveats. ↵
</spoiler>↵
↵
↵
<spoiler summary="Hint 3">↵
The issue with counting the total number is the number of columns with exactly one $1$. Try enumerating over this value, and solve the third subtask.↵
</spoiler>↵
↵
↵
<spoiler summary="Hint 4">↵
When $k$ does not divide $n$, we can split the columns into two parts: the first $n-k\cdot\lfloor n/k\rfloor$ columns (the remainder), and the rest.↵
</spoiler>↵
↵
↵
<spoiler summary="Hint 1.5">↵
Enumerate over the number of one columns, and the number of one columns in the remainder. This gives us a solution to the second subtask.↵
</spoiler>↵
↵
↵
<spoiler summary="Hint 1.6">↵
Can we optimize this further using FFT?↵
</spoiler>↵
↵
↵
<spoiler summary="Solution 1">↵
Define $m=\lfloor n/k\rfloor$ and $r=n-km$. If there are $x$ one-columns in each submatrix, and $y$ of those columns are in the remainder, the answer is given by ↵
↵
$$↵
\begin{align*}↵
{r\choose y}{k-r\choose x-y}{k-x\choose\frac{s-x}{2}}\cdot 2^{xm+y}.↵
\end{align*}↵
$$↵
↵
Thus, we can solve the first subtask by precomputing factorials and inverse factorials and directly summing over all $x$ and $y$:↵
↵
$$↵
\begin{align*}↵
\sum_{x=0}^{\min(k,s)}\sum_{y=0}^{\min(x,r)}{r\choose y}{k-r\choose x-y}{k-x\choose\frac{s-x}{2}}\cdot 2^{xm+y}.↵
\end{align*}↵
$$↵
↵
Expanding the binomial coefficients and simplifying, we get the expression↵
↵
$$↵
\begin{align*}↵
r!(k-r)!\cdot\sum_{x=0}^{\min(k,s)} {k-x\choose\frac{s-x}{2}}2^{xm}\cdot \left(\sum_{y=0}^{\min(x,r)} \frac{2^y}{y!(r-y)!(x-y)!((k-r)-(x-y))!}\right)↵
\end{align*}↵
$$↵
↵
In particular, define $f(x)$ to be ↵
↵
$$↵
\begin{align*}↵
f(x)=\sum_{y=0}^{\min(x,r)} \frac{2^y}{y!(r-y)!(x-y)!((k-r)-(x-y))!}.↵
\end{align*}↵
$$↵
↵
It's clear that $f(x)$ is the convolution of $g(x)$ and $h(x)$, defined as↵
↵
$$↵
\begin{align*}↵
g(x)=\frac{2^x}{x!(r-x)!}\cdot[x\leq r]↵
\end{align*}↵
$$↵
↵
$$↵
\begin{align*}↵
h(x)=\frac{1}{x!(k-r-x)!}↵
\end{align*}↵
$$↵
↵
We can thus evaluate $f$ in $\mathcal{O}(n\log n)$ time, and directly evaluate the sum for the answer.↵
↵
**Time Complexity:** $\mathcal{O}(n\log n)$↵
</spoiler>↵
↵
<spoiler summary="Code">↵
↵
~~~~~↵
#include <bits/stdc++.h>↵
#include <atcoder/modint>↵
#include <atcoder/convolution>↵
↵
using mint = atcoder::modint998244353;↵
↵
int main() {↵
std::ios_base::sync_with_stdio(false);↵
std::cin.tie(NULL);↵
↵
int tc;↵
std::cin >> tc;↵
while (tc--) {↵
long long n, k, s;↵
std::cin >> n >> k >> s;↵
std::vector<mint> pp(k + 1), ss(k + 1);↵
↵
pp[0] = 1;↵
for (int i = 1; i <= k; i++)↵
pp[i] = pp[i - 1] * mint::raw(i);↵
ss[k] = 1 / pp[k];↵
for (int i = k - 1; i >= 0; i--)↵
ss[i] = ss[i + 1] * mint::raw(i + 1);↵
auto cc = [&](int nn, int kk) { return nn < kk ? 0 : pp[nn] * ss[kk] * ss[nn - kk]; };↵
↵
int r = n % k;↵
long long m = n / k;↵
↵
mint p2 = 1;↵
std::vector<mint> g(r + 1), h(std::min(k, s) + 1);↵
for (int i = 0; i <= r; i++) {↵
g[i] = p2 * ss[i] * ss[r - i];↵
p2 *= mint::raw(2);↵
}↵
for (int i = 0; i <= std::min(k, s); i++) {↵
h[i] = ss[i] * (k - r - i >= 0 ? ss[k - r - i] : 0);↵
}↵
↵
auto f = atcoder::convolution(g, h);↵
f.resize(std::min(k, s) + 1, 0);↵
mint ans = 0;↵
for (int x = s % 2; x <= std::min(k, s); x += 2)↵
ans += cc(k - x, (s - x) / 2) * mint(2).pow(x * m) * f[x];↵
ans *= pp[r] * pp[k - r];↵
std::cout << ans.val() << '\n';↵
}↵
}↵
~~~~~↵
↵
↵
</spoiler>↵
↵
↵
<spoiler summary="Hint 2.5">↵
Consider a quadratic-time DP solution to the second subtask. Because the transitions are relatively simple, can we optimize this further?↵
</spoiler>↵
↵
↵
<spoiler summary="Solution 2">↵
Note: The problem author does not fully understand this solution.↵
↵
In the case where $n$ is divisible by $k$, the answer is the coefficient of $x^s$ in the expansion of $(1 + 2^{\frac{n}{k}}x + x^2)^k$. This is derived directly from the knapsack-like recurrence, or can be derived indirectly through generating functions. In any case, we can use FFT to evaluate this polynomial in $\mathcal{O}(n\log^2 n)$ time using binary exponentiation, which may or may not pass the time limit.↵
↵
To optimize this, there are a couple of options. Firstly, we can use matrix multiplication to perform fast expansion of trinomials, as described [here](https://cs.uwaterloo.ca/journals/JIS/VOL9/Noe/noe35.pdf). Secondly, note that for a formal power series $f$, we have $f^n=e^{\ln f^n}=e^{n\ln f}$. The logarithm and exponential of an FPS can be found using standard methods in $\mathcal{O}(n\log n)$ time, so this solution also runs in $\mathcal{O}(n\log n)$.↵
↵
To adapt this to the full problem, it suffices to split the recurrence into two parts, the remainder and the non-remainder, and multiply the answers.↵
↵
**Time Complexity:** $\mathcal{O}(n\log n)$↵
</spoiler>↵
↵
↵
### [R. Oil Fields](https://codeforces.me/gym/104520/problem/R)↵
↵
<spoiler summary="Hint 1.1">↵
Constraints seem to suggest an $\mathcal{O}(n^3)$ solution. Try to reformulate this problem as an all-pairs shortest path problem.↵
</spoiler>↵
↵
↵
<spoiler summary="Solution 1">↵
Let the points be denoted by $a_1, a_2, \dots, a_n$, sorted by $x$ coordinate. Without loss of generality, assume that all points have distinct $x$ coordinates. Define $p$ and $s$ such that ↵
↵
$$↵
\begin{align*}↵
p_i=\sum_{j=1}^{i-1}w_j↵
\end{align*}↵
$$↵
↵
$$↵
\begin{align*}↵
s_i=\sum_{j=i+1}^{n}w_j↵
\end{align*}↵
$$↵
↵
In other words, $p_i$ is the sum of weights of points to the left of point $i$ and $s_i$ is the sum of weights of points to the right of point $i$. Define $h_{i,j}$ as ↵
↵
$$↵
h_{i,j}=\begin{cases}↵
\sum_{k=i+1}^{j-1}w_k\cdot[a_k\text{ is above }\overline{a_ia_j}] & i<j\\↵
\sum_{k=j+1}^{i-1}w_k\cdot[a_k\text{ is below }\overline{a_ja_i}] & i>j↵
\end{cases}↵
$$↵
↵
Create a graph of $n$ points numbered $1,2,\dots,n$, where the edge from $i$ to $j$ has weight $m\cdot \text{distance}(a_i,a_j)+h_{i,j}$. Let $d_{i,j}$ denote the shortest path between nodes $i$ and $j$ in the graph, and let $W$ be the sum of weights over all points. Then, the maximum profit Alice can make is ↵
↵
$$↵
\max_{i,j} (W-p_i-s_j-d_{i,j}-d_{j,i}).↵
$$↵
↵
The matrix $h$ can be naively calculated in $\mathcal{O}(n^3)$ (which can be further optimized to $\mathcal{O}(n^2)$), and the all-pairs shortest path can be calculated using the Floyd-Warshall algorithm in $\mathcal{O}(n^3)$. ↵
↵
**Time Complexity:** $\mathcal{O}(n^3)$↵
</spoiler>↵
↵
<spoiler summary="Code 1">↵
↵
~~~~~↵
#include <bits/stdc++.h>↵
↵
template<typename T>↵
struct point {↵
T x, y;↵
↵
point &operator+=(const point &rhs) {↵
x += rhs.x, y += rhs.y;↵
return *this;↵
}↵
↵
point &operator-=(const point &rhs) {↵
x -= rhs.x, y -= rhs.y;↵
return *this;↵
}↵
};↵
↵
template<typename T>↵
point<T> operator+(point<T> lhs, const point<T> &rhs) { return lhs += rhs; }↵
↵
template<typename T>↵
point<T> operator-(point<T> lhs, const point<T> &rhs) { return lhs -= rhs; }↵
↵
template<typename T>↵
T operator*(const point<T> &lhs, const point<T> &rhs) { return lhs.x * rhs.x + lhs.y * rhs.y; }↵
↵
template<typename T>↵
T operator^(const point<T> &lhs, const point<T> &rhs) { return lhs.x * rhs.y - lhs.y * rhs.x; }↵
↵
template<typename T> inline T sq(T x) { return x * x; }↵
↵
template<typename T>↵
double dis(const point<T> &lhs, const point<T> &rhs) { return sqrt(sq((double)lhs.x-rhs.x) + sq((double)lhs.y-rhs.y)); }↵
↵
int main() {↵
int tc;↵
std::cin >> tc;↵
while (tc--) {↵
int n;↵
long long m, c;↵
std::cin >> n >> m >> c;↵
long long v = 0;↵
std::vector<std::pair<point<long long>, int>> a(n);↵
for (int i = 0; i < n; i++) {↵
long long x, y, w;↵
std::cin >> x >> y >> w;↵
v += w;↵
a[i] = {point<long long>{x, y}, w};↵
}↵
↵
std::sort(a.begin(), a.end(), [](auto &&a, auto &&b) {↵
return a.first.x < b.first.x || (a.first.x == b.first.x && a.first.y < b.first.y);↵
});↵
↵
std::vector<long long> ps(n), ss(n);↵
ps[0] = a[0].second, ss[n - 1] = a[n - 1].second;↵
for (int i = 1; i < n; i++)↵
ps[i] = ps[i - 1] + a[i].second;↵
for (int i = n - 2; i >= 0; i--)↵
ss[i] = ss[i + 1] + a[i].second;↵
↵
std::vector gk(n, std::vector<long long>(n, 0));↵
for (int i = 0; i < n; i++) {↵
for (int j = i + 1; j < n; j++) {↵
auto s = a[i].first, t = a[j].first;↵
double m = (double)(s.y - t.y) / (s.x - t.x);↵
for (int k = i + 1; k < j; k++) {↵
auto u = a[k].first;↵
if (u.y - s.y > m * (u.x - s.x)) {↵
gk[i][j] += a[k].second;↵
} else {↵
gk[j][i] += a[k].second;↵
}↵
}↵
}↵
}↵
↵
std::vector g(n, std::vector<double>(n, 0));↵
for (int i = 0; i < n; i++) ↵
for (int j = 0; j < n; j++)↵
g[i][j] = gk[i][j] + m * dis(a[i].first, a[j].first);↵
for (int k = 0; k < n; k++)↵
for (int i = 0; i < n; i++)↵
for (int j = 0; j < n; j++)↵
g[i][j] = std::min(g[i][j], g[i][k] + g[k][j]);↵
↵
double ans = -c;↵
for (int i = 0; i < n; i++) {↵
for (int j = i; j < n; j++) {↵
auto px = i == 0 ? 0 : ps[i - 1];↵
auto sx = j == n - 1 ? 0 : ss[j + 1];↵
ans = std::max(ans, v - px - sx - g[i][j] - g[j][i]);↵
}↵
}↵
ans -= c;↵
std::cout << std::fixed << std::setprecision(6) << ans << '\n';↵
}↵
}↵
~~~~~↵
↵
↵
</spoiler>↵
↵
<spoiler summary="Hint 2.1">↵
Try doing a DP in angular-sweep order for each point. ↵
</spoiler>↵
↵
↵
<spoiler summary="Hint 2.2">↵
Optimize the transition from $\mathcal{O}(n)$ to $\mathcal{O}(1)$ by using a push-forward DP and a second angular sweep.↵
</spoiler>↵
↵
<spoiler summary="Solution 2">↵
First, we describe a way to efficiently handle triangle queries. This is a necessary step for this solution.↵
↵
Let $\tau_{i,j,k}$ denote the sum of weights of points contained in $\triangle a_ia_ja_k$. Without loss of generality, assume that $a_i, a_j, a_k$ are in clockwise order. A triangle query can be decomposed into a combination of several halfplane and sector queries (described below). A halfplane query $\eta_{i,j}$ is defined as the sum of weights of points to the right of $\overleftrightarrow{a_ia_j}$. A sector query $\sigma_{i,j,k}$ is defined as the sum of weights of points located in the sector defined by rays $\overrightarrow{a_ja_i}$ and $\overrightarrow{a_ja_k}$. Let $W=\sum_{i=1}^n w_i$. We have↵
↵
$$↵
\tau_{i,j,k}=\sigma_{i,j,k}+\sigma_{k,i,j}+\sigma_{j,k,i}-\eta_{i,j}-\eta_{j,k}-\eta_{k,i}+W.↵
$$↵
↵
Consider enumerating each point $a_i$ and sorting all other points about $a_i$. This can be done for all points in $\mathcal{O}(n^2)$ or $\mathcal{O}(n^2\log n)$. We can calculate $\eta_{i,j}$ for all points $j$ with a simple radial sweep. For angular queries, let $\rho_{i,j}$ be the sum of weights of points in the sector between the vertical ray starting at $a_i$ and $\overrightarrow{a_ia_j}$. Then, we have ↵
↵
$$↵
\sigma_{i,j,k}=\lvert \rho_{j,k}-\rho_{j,i} \rvert.↵
$$↵
↵
Thus, triangle queries can be preprocessed in $\tilde{\mathcal{O}}(n^2)$ and done in constant time.↵
↵
Consider enumerating the lowest point in the hull, and let this point be $p$. Let the points with strictly greater $y$ coordinate be denoted by $a_1, a_2, \dots, a_k$, sorted by angle from $p$. Define states $f_{i,j}$ with $j<i$ as the maximum cost convex polygon that contains the segments $\overline{a_ia_j}$ and $\overline{a_ip}$. Transitions can be handled as follows:↵
↵
$$↵
\begin{align*}↵
f_{i,j}=\max_{k | \overline{a_ja_i}\text{ turns left from }\overline{a_ka_j}} f_{j,k} + \tau_{p,i,j} - m \cdot \text{distance}(a_i,a_j).↵
\end{align*}↵
$$↵
↵
This can be optimized to $\mathcal{O}(n^3)$ by using a push forward DP, rotating around the point $j$. The details are left as an exercise to the reader. ↵
↵
**Time Complexity:** $\mathcal{O}(n^3)$↵
</spoiler>↵
↵
↵
<spoiler summary="Code 2">↵
~~~~~↵
//A solution is omitted :D↵
~~~~~↵
</spoiler>↵
↵
↵
### [S. Farming Negative Karma](https://codeforces.me/gym/104520/problem/S)↵
↵
<spoiler summary="Hint 1">↵
The problem is equivalent to counting cells by how many times they have been tilled: subtract the number of cells tilled $< k$ times from the size of the query rectangle.↵
</spoiler>↵
↵
<spoiler summary="Hint 2">↵
We can sweep on the x-axis. At each position on the x-axis, we can query the number of cells that have been tilled $< k$ times by storing the $k$ smallest values, as well as how many times they occur, in each node of a segment tree. The update rectangles can be decomposed into a range add at the left edge and a range subtract at the right edge. How do we do queries?↵
</spoiler>↵
↵
<spoiler summary="Hint 3">↵
Let's say that each different position on the x-axis is a new version of our segment tree. Each query rectangle can be separated into two queries, at its left and right edges, where we want to find the number of times a value occurs in a range over all versions of the segment tree, i.e. a historic sum.↵
</spoiler>↵
↵
<spoiler summary="Solution">↵
This problem was inspired by [BPC 1 S6 — Painters](https://dmoj.ca/problem/bpc1s6).↵
↵
For each node of the segment tree, we will maintain the $k$ smallest values and the number of times each occurs, as well as the historic sum for each of those $k$ values. In each lazy tag, we will store the $k$ smallest values it passes through, the amount of time spent at each of them, and the final value of the tag. This lets us maintain a segment tree in which each update creates a new version. However, there can be multiple updates per version, so we either split each x position into multiple range updates such that each index is covered by exactly one update (this is a bit slow and also untested, so it may or may not pass the time limit), or we maintain the most recent version at which each node was updated and take that into account during propagation. The time complexity is $\mathcal{O}(n \cdot k^2 \log n)$ and the memory complexity is $\mathcal{O}(n \cdot k)$.↵
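↵
For concreteness, the per-node and per-tag bookkeeping described above could be laid out roughly as follows; the names `Node`, `Tag`, and the fixed `K` are hypothetical, and this is only a data-layout sketch under the stated assumptions, not a working solution.↵
↵
~~~~~↵
// Hypothetical layout sketch only — not a full solution.↵
#include <bits/stdc++.h>↵
using namespace std;↵
↵
constexpr int K = 10; // assumption: stands in for the threshold k from the statement↵
↵
struct Node {↵
    long long val[K];  // the K smallest values present in this node's range↵
    long long cnt[K];  // how many cells of the range currently hold val[i]↵
    long long hist[K]; // historic sum accumulated while cells sat at val[i]↵
};↵
↵
struct Tag {↵
    long long seen[K]; // the K smallest values this (uniform) range passed through↵
    long long stay[K]; // number of versions the range stayed at seen[i]↵
    long long fin;     // the value the range holds once the tag is fully applied↵
};↵
~~~~~↵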
</spoiler>↵
↵
<spoiler summary="Code">↵
Let's be honest: if you're trying to solve this, you already have coach mode, so just go use that instead.↵
</spoiler>↵
↵
Happy Upsolving! As a hint towards a possible problem on the next contest: ↵
↵
<spoiler summary="Solution"> ![kelp](/predownloaded/05/a7/05a719e9b55e50a2339584b16d2840076b2d9860.png) </spoiler>↵
↵
↵