Thank you for participating! We put a lot of effort into this contest. Special thanks to TheScrasse for contributing to these problems.
Person | A | B | C1 | C2 | D | E | F | G | H |
---|---|---|---|---|---|---|---|---|---|
oursaco | 800 | 1200 | 1500 | 1800 | 2200 | 2400 | 2800 | 2700 | 3400 |
priyanshu.p | 800 | 1200 | 1300 | 1600 | 2200 | ||||
TheScrasse | 800 | 1200 | 1400 | 1700 | 2100 | 2300 | 2900 | 2900 | 3500 |
cry | 800 | 1000 | 1300 | 1700 | 2100 | 2200 | 2800 | 2900 | 3500 |
buffering | 800 | 1000 | 1500 | 1900 | 2100 | 2200 | 2700 | 2800 | 3500 |
JagguBandar | 800 | 1200 | 1700 | 2000 | 1800 | 2300 | |||
omeganot | 900 | 1000 | 1400 | 1800 | 2200 | 2300 | 2600 | ||
drdilyor | 800 | 1300 | 1500 | 1800 | 2100 | 2200 | |||
jay_jayjay | 900 | 1400 | 1300 | 1600 | 1800 | 2300 | 2600 | 2800 | 3500 |
chromate00 | 900 | 1300 | 1400 | 1700 | 2200 | 2400 | 2700 | 2900 | 3400 |
maxrgby | 800 | 1200 | 1300 | 1700 | 2100 | ||||
SunShine11 | 800 | 1300 | 1500 | 1800 | 2100 | ||||
MridulAhi | 800 | 1200 | 1500 | 2000 | 1900 | 2200 | 2600 | 2800 | |
WAtoAC2001 | 800 | 1300 | 1600 | 2000 | 2300 | 2800 | |||
rainboy | 2100 | 2400 | 2800 | 2700 | 3400 | ||||
Geothermal | 800 | 900 | 1300 | 1700 | 1800 | 2300 | 2600 | 2800 | 3500 |
lethan3 | 800 | 1100 | 1500 | 1900 | 2200 | ||||
awesomeguy856 | 800 | 1200 | 1000 | 1500 | 2100 | 2100 | 2400 | ||
camc | 800 | 1100 | 1500 | 1900 | 2100 | 2200 | |||
rqi | 1000 | 1500 | 1600 | 1900 | 2000 | 2300 | 2700 | ||
sum | 2000 | 2200 | 2600 |
 | A | B | C1 | C2 | D | E | F | G | H |
---|---|---|---|---|---|---|---|---|---|
Average | 811 | 1189 | 1455 | 1810 | 2100 | 2273 | 2709 | 2770 | 3463 |
Actual | 800 | 1100 | 1300 | 1700 | 2100 | 2300 | 2700 | 2800 | 3500 |
Solutions
1942A - Farmer John's Challenge
Problem Credits: buffering
Analysis: buffering
Solve for $$$k = 1$$$.
$$$a = 1, 2, 3 \dots n$$$ works. Why?
Solve for $$$k = n$$$.
$$$a = 1, 1, 1 \dots 1$$$ works. Why?
What other $$$k$$$ work for a given $$$n$$$?
Only $$$k = 1$$$ and $$$k = n$$$ have possible answers.
Read the hints.
For $$$k=1$$$, the construction $$$1,2,3\dots n$$$ will always work because in any other cyclic shift, $$$n$$$ will be before $$$1$$$.
Now, consider what happens if we want to find an array with two or more cyclic shifts that are sorted. If we consider two of these cyclic shifts, notice they are actually shifts of each other as well.
So, there should be a sorted array and a shift of it which is also sorted. This means that some prefix of the array moves to the back while the array stays sorted, which can only happen if all values in the array are equal.
If the array has all equal values, this gives us our construction for $$$k=n$$$. As we have seen, for $$$k>1$$$, only $$$k=n$$$ is possible. Thus, for any other $$$k$$$ not equal to $$$1$$$ or $$$n$$$, we can print $$$-1$$$.
#include <iostream>
using namespace std;
int main(){
int t; cin >> t;
while(t--){
int n, k; cin >> n >> k;
if(k == 1) for(int i = 0; i < n; i++) cout << i + 1 << " ";
else if(k == n) for(int i = 0; i < n; i++) cout << 1 << " ";
else cout << -1;
cout << "\n";
}
}
1942B - Bessie and MEX
Problem Credits: buffering
Analysis: buffering
Solution 1
We will construct the solution forward.
Separate the $$$a_i$$$'s given into negative and positive cases. What does this tell us about the $$$\texttt{MEX}$$$?
We can find $$$p_1, p_2, \ldots , p_n$$$ in order, looking at the positive and negative cases. Note that $$$a_i \neq 0$$$, since $$$a_i = 0$$$ would mean $$$p_i = \texttt{MEX}(p_1 \dots p_i)$$$, which can never happen.
- If $$$a_i > 0$$$, then $$$\texttt{MEX}$$$($$$p_1, p_2, ... p_i$$$) must increase from $$$\texttt{MEX}$$$($$$p_1, p_2, ... p_{i-1}$$$), so we know $$$p_i$$$ must equal $$$\texttt{MEX}$$$($$$p_1, p_2, ... p_{i-1}$$$).
- Otherwise, the $$$\texttt{MEX}$$$ stays the same, so $$$p_i$$$ is simply $$$\texttt{MEX}(p_1, p_2, \ldots, p_{i-1}) - a_i$$$.
Thus, we can just maintain the $$$\texttt{MEX}$$$ and find each $$$p_i$$$ as we go forward.
There are only 2 things the $$$\texttt{MEX}$$$ can do: increase or stay the same (it can never decrease since larger prefixes contain smaller ones).
- In the case of a positive difference, consider what would happen if the $$$\texttt{MEX}$$$ stayed the same. Since the difference is positive, the $$$\texttt{MEX}$$$ would have to be greater than the current value, which would require the current value to have appeared earlier in the prefix. Since the $$$\texttt{MEX}$$$ is calculated on a permutation, whose values are distinct, that can't happen. So the $$$\texttt{MEX}$$$ has to increase.
- In the case of a negative value, the $$$\texttt{MEX}$$$ has to be less than the current value. But if the $$$\texttt{MEX}$$$ increased, that would mean the current value was the previous $$$\texttt{MEX}$$$, making the new $$$\texttt{MEX}$$$ at least (current value + 1), which is greater than the current value. So the $$$\texttt{MEX}$$$ has to stay the same.
Note that this is also a way to show $$$p$$$ is always unique.
#include <bits/stdc++.h>
using namespace std;
void solve(){
int n; cin >> n;
vector<int> a(n);
for(int& i: a) cin >> i;
vector<int> p(n), has(n + 1);
int mex = 0;
for(int i = 0; i < n; i++){
if(a[i] >= 0){
p[i] = mex;
}
else{
p[i] = mex - a[i];
}
has[p[i]] = true;
while(has[mex]) mex++;
}
for(int i: p) cout << i << " ";
cout << "\n";
}
int main(){
cin.tie(0) -> sync_with_stdio(0);
int T = 1;
cin >> T;
while(T--) solve();
}
/* /\_/\
* (= ._.)
* / > \>
*/
Solution 2
We will construct the solution backwards.
The $$$\texttt{MEX}$$$ is determined at the last index, since all of $$$0, 1 \dots n-1$$$ appear in $$$p$$$.
Read the hints.
Since we know the $$$\texttt{MEX}$$$ of the last position is $$$n$$$, then $$$n - p_n = a_n$$$. From this equation, we can find that $$$p_n = n - a_n$$$.
Now, because we know $$$p_n$$$, we can determine the $$$\texttt{MEX}$$$ of the first $$$n-1$$$ numbers. Like how we found $$$p_n$$$, we can do a similar process for finding $$$p_{n-1}$$$. Doing this for $$$i = n, n - 1 \dots 1$$$ will get us a valid answer $$$p$$$.
Note that this is also a way to show $$$p$$$ is always unique.
1942C1 - Bessie's Birthday Cake (Easy Version) and 1942C2 - Bessie's Birthday Cake (Hard Version)
Problem Credits: cry
Analysis: cry
Ignoring $$$n$$$ for now, let's just focus on the $$$x$$$ chosen vertices. Sort the $$$x$$$ vertices and connect adjacent ones to form a smaller polygon. By drawing out some cases, or if you're familiar with polygon triangulation, you can see that you can always form $$$x - 2$$$ triangles by drawing diagonals in a polygon with $$$x$$$ vertices. One construction that always works is fixing one vertex $$$v$$$ and drawing diagonals to every other vertex not adjacent to $$$v$$$. Now, let's consider the original $$$n$$$-sided polygon. In addition to the aforementioned construction, to close the $$$x$$$-sided polygon up: for every pair of chosen vertices that are adjacent in the smaller polygon but not in the bigger $$$n$$$-sided polygon, draw a diagonal between them. Through this construction, we can always guarantee $$$x - 2$$$ triangles.
However, this doesn't account for all triangles, as some triangles can have sides on the bigger $$$n$$$-sided polygon. These triangles occur exactly when two adjacent chosen vertices have exactly one vertex of the bigger polygon, not in $$$x$$$, between them. This is because one side comes from the diagonals we drew, and the other two sides come from the $$$n$$$-sided polygon.
Therefore, the answer is $$$x - 2$$$ plus the number of pairs of adjacent chosen vertices that are exactly $$$2$$$ apart. Note that the first chosen vertex is also adjacent to the last chosen vertex.
Read the solution to the easy version first.
We can now reduce the problem to the following: for every vertex we choose, we can make one more triangle, and for every pair of chosen vertices two apart, we can make one more triangle. Let's focus on the latter condition. To maximize the effect of our $$$y$$$ vertices, we want to prioritize vertices $$$v$$$ that satisfy the following: $$$v$$$ is not among the $$$x$$$ chosen vertices, and there is a chosen vertex two apart from $$$v$$$. Let's call such $$$v$$$ good.
Let vertices $$$a$$$ and $$$b$$$ $$$(a < b)$$$ be adjacent chosen vertices in the $$$x$$$-sided polygon. There are $$$g = b - a - 1$$$ vertices for us to choose between them. There are two cases. If $$$g$$$ is odd, then we can choose $$$\lfloor \frac{g}{2} \rfloor$$$ vertices to form $$$\lfloor \frac{g}{2}\rfloor + 1$$$ extra triangles, because all of the vertices we choose will be good. If $$$g$$$ is even, then we can choose $$$\frac{g}{2}$$$ vertices to make $$$\frac{g}{2}$$$ extra triangles, because one of the vertices we choose will not be good. This shows that it is always optimal to process odd $$$g$$$ first, in increasing order.
After processing these gaps, if we have any leftover vertices, we can simply ignore them. Since we have already maximized the number of good vertices, any further vertex we place will increase the answer by $$$1$$$ but break two good vertices, which decreases the answer by $$$1$$$.
#include <bits/stdc++.h>
using namespace std;
void solve(){
int n, x, y; cin >> n >> x >> y;
int initial_y = y;
vector<int> chosen(x);
for(int& i: chosen) cin >> i;
sort(chosen.begin(), chosen.end());
int ans = x - 2;
int triangles_from_even_g = 0;
vector<int> odd_g;
auto process_gap = [&](int g) -> void{
if(g <= 1){
// already two apart
ans += g;
}
else if(g % 2 == 1){
odd_g.push_back(g / 2);
}
else{
triangles_from_even_g += g / 2;
}
};
for(int i = 0; i < x - 1; i++){
process_gap(chosen[i + 1] - chosen[i] - 1);
}
process_gap((chosen[0] + n) - chosen[x - 1] - 1);
sort(odd_g.begin(), odd_g.end());
for(int g: odd_g){
if(y >= g){
// all vertices are good, so we can make g + 1 triangles
ans += g + 1;
y -= g;
}
else{
ans += y;
y = 0;
break;
}
}
int even_triangles = min(triangles_from_even_g, y);
y -= even_triangles;
ans += even_triangles;
int used_vertices = initial_y - y;
ans += used_vertices;
cout << ans << "\n";
}
int main(){
cin.tie(0) -> sync_with_stdio(0);
int T = 1;
cin >> T;
while(T--) solve();
}
/* /\_/\
* (= ._.)
* / > \>
*/
1942D - Learning to Paint
Problem Credits: sum
Analysis: sum
We can solve this with dynamic programming. Let $$$\texttt{dp}[i]$$$ store a list of all $$$\min(k, 2^i)$$$ highest beauties of a painting in non-increasing order if you only paint cells $$$1,2,\ldots ,i$$$.
Transitioning boils down to finding the $$$k$$$ largest elements from $$$n$$$ non-increasing lists. Try to do this in $$$\mathcal{O}((n + k) \log n)$$$ time.
Let $$$\texttt{dp}[i]$$$ store a list of all $$$\min(k, 2^i)$$$ highest beauties of a painting in non-increasing order if you only paint cells $$$1,2,\ldots ,i$$$.
Let's define merging two lists as creating a new list that stores the $$$k$$$ highest values from the two lists.
First, let's look at a slow method of transitioning. We iterate over the left endpoint $$$l$$$ such that $$$l \ldots i$$$ is a maximal painted interval. At each $$$l$$$, we merge $$$\texttt{dp}[l - 2]$$$, with $$$a_{l,i}$$$ added to each value, into $$$\texttt{dp}[i]$$$. We also merge $$$\texttt{dp}[i - 1]$$$ into $$$\texttt{dp}[i]$$$ to handle the case in which we do not paint cell $$$i$$$. If implemented well, transitioning takes $$$\mathcal{O}(nk)$$$ time leading to an $$$\mathcal{O}(n^2k)$$$ solution which is too slow.
We can speed this up. This merging process boils down to finding the $$$k$$$ largest elements from $$$n$$$ non-increasing lists in $$$\mathcal{O}((n + k) \log n)$$$ time. We can use a priority queue that stores ($$$\texttt{element}, \texttt{index}$$$) for each list. We can do the following $$$k$$$ times: add the top of the priority queue to our answer, advance its index, and update the priority queue accordingly. This allows us to transition in $$$\mathcal{O}((n + k) \log n)$$$ time which leads to an $$$\mathcal{O}((n^2 + nk) \log n)$$$ solution.
Bonus: find an $$$\mathcal{O}(n^2 + k)$$$ solution.
#include <bits/stdc++.h>
using namespace std;
void solve(){
int n, k;
cin >> n >> k;
int A[n + 1][n + 1];
for (int i = 1; i <= n; i++)
for (int j = i; j <= n; j++)
cin >> A[i][j];
// dp[i] = Answer if we consider 1...i
vector<int> dp[n + 1];
dp[0] = {0};
for (int i = 1; i <= n; i++){
priority_queue<array<int, 3>> pq;
// Don't create an interval
pq.push({dp[i - 1][0], i - 1, 0});
// Create interval j+2...i (transition from j)
for (int j = i - 2; j >= -1; j--){
pq.push({(j < 0 ? 0 : dp[j][0]) + A[j + 2][i], j, 0});
}
while (pq.size() and dp[i].size() < k){
auto [val, j, num] = pq.top(); pq.pop();
dp[i].push_back(val);
if (j < 0 or num + 1 >= dp[j].size())
continue;
// Don't create interval
if (j == i - 1)
pq.push({dp[i - 1][num + 1], i - 1, num + 1});
// Create interval j+2...i (transition from j)
else
pq.push({dp[j][num + 1] + A[j + 2][i], j, num + 1});
}
}
for (int i : dp[n])
cout << i << " ";
cout << "\n";
}
int main(){
ios_base::sync_with_stdio(0); cin.tie(0);
int tc;
cin >> tc;
while (tc--)
solve();
}
1942E - Farm Game
Problem Credits: cry
Analysis: sum
Think about the gaps between $$$(a_1,b_1), (a_2,b_2), ... ,(a_n,b_n)$$$. Let the gaps be $$$g_1,g_2,...,g_n$$$. What effect do the moves have on the gaps?
WLOG let the first cow be FJ's cow. FN wins if all $$$g_i$$$ are even. Otherwise FJ wins. Note that the game will always end.
Try to count the number of ways FN wins. We can iterate over the sum of $$$g_i$$$ and use stars and bars as well as other counting techniques.
Consider the gaps between $$$(a_1,b_1), (a_2,b_2), \ldots ,(a_n,b_n)$$$. For example, let '$$$\texttt{a}$$$' represent FJ's cow, '$$$\texttt{b}$$$' represent FN's cow, and '$$$\texttt{.}$$$' represent an empty space. Then in $$$\texttt{a..b.a...b}$$$, the gaps are $$$[2,3]$$$.
Consider the gaps $$$g_1,g_2,\ldots,g_n$$$. When a farmer moves, they choose some non-empty subset of non-zero $$$g_i$$$ and add or subtract $$$1$$$ from every element in the subset (so long as such a move is possible). The game ends when some farmer cannot move (which implies that all $$$g_i$$$ are $$$0$$$ when the game ends).
WLOG let the first cow be FJ's cow. The winner of the game is determined as follows:
1) If all $$$g_i$$$ are even, then FN wins. If FJ pushes the cows corresponding to gaps $$$i_1, i_2, \ldots, i_k$$$ in some direction in a move, then $$$g_{i_1}, g_{i_2}, \ldots, g_{i_k}$$$ will change by $$$1$$$ and become odd. FN can then push the cows corresponding to gaps $$$i_1, i_2, \ldots, i_k$$$ left so that $$$g_{i_1}, g_{i_2}, \ldots, g_{i_k}$$$ become even again. Therefore, all $$$g_i$$$ will be even whenever it's FJ's turn and at least one $$$g_i$$$ will be odd whenever it's FN's turn. Since the game ends when all $$$g_i$$$ are $$$0$$$ (which is even), FJ will lose. Note that the game always ends, because the sum of the positions of FN's cows strictly decreases: FN always pushes some cow left.
2) Otherwise, FJ wins, since he can subtract $$$1$$$ from all odd $$$g_i$$$ in his first move to make all elements even. Then it's FN's turn with all $$$g_i$$$ even, so by the reasoning above, FJ wins.
Let's use complementary counting and determine the number of ways FN wins (which is when all $$$g_i$$$ are even). We can iterate over the even sum $$$s$$$ of all $$$g_i$$$. At each $$$s$$$, we can use stars and bars to find how many such $$$g_1,g_2,\ldots,g_n$$$ (all $$$g_i$$$ even) sum to $$$s$$$. Specifically, the number of ways is $$${\frac{s}{2} + n - 1 \choose n - 1}$$$. The number of ways to place the cows in the available positions given the gaps is $$$2 \cdot {l - s - n \choose n}$$$. We multiply by $$$2$$$ since the alternation can start with either FJ's cow or FN's cow.
Finally, we subtract this result from the total number of configurations, $$$2 \cdot {l \choose 2 \cdot n}$$$, to get the number of ways FJ wins. This runs in $$$\mathcal{O}(l)$$$ time.
#include <bits/stdc++.h>
using namespace std;
#define ll long long
const int MAX = 2e6 + 5, MOD = 998244353;
ll fact[MAX], ifact[MAX];
ll bpow(ll a, int p){
ll ans = 1;
for (;p; p /= 2, a = (a * a) % MOD)
if (p & 1)
ans = (ans * a) % MOD;
return ans;
}
ll ncr(int n, int r){
if (n < 0)
return 0;
if (r > n)
return 0;
return fact[n] * ifact[r] % MOD * ifact[n - r] % MOD;
}
void solve(){
int l, n;
cin >> l >> n;
ll all_even = 0;
for (int s = 0; s <= l; s += 2){
all_even += 2 * ncr(s / 2 + n - 1, n - 1) % MOD * ncr(l - s - n, n) % MOD;
all_even %= MOD;
}
cout << (2 * ncr(l, 2 * n) % MOD - all_even + MOD) % MOD << "\n";
}
int main(){
ios_base::sync_with_stdio(0); cin.tie(0);
for (int i = 0; i < MAX; i++)
fact[i] = !i ? 1 : fact[i - 1] * i % MOD;
for (int i = MAX - 1; i >= 0; i--)
ifact[i] = (i == MAX - 1) ? bpow(fact[MAX - 1], MOD - 2) : ifact[i + 1] * (i + 1) % MOD;
int tc;
cin >> tc;
while (tc--)
solve();
}
1942F - Farmer John's Favorite Function
Problem Credits: sum
Analysis: sum
Consider the case where all $$$f(i)$$$ are integers. Decreasing any $$$a_i$$$ will decrease $$$\lfloor f(n) \rfloor$$$. So solutions that iterate over the last few elements will not work.
If $$$n \ge 6$$$, changing $$$a_1$$$ will only affect $$$\lfloor f(n) \rfloor$$$ by at most $$$1$$$.
We can take the floor each time we square root. Specifically, we can define $$$f$$$ as:
- $$$f(1) = \lfloor \sqrt{a_1} \rfloor$$$
- For all $$$i > 1$$$, $$$f(i) = \lfloor \sqrt{f(i-1)+a_i} \rfloor$$$
This allows us to work with integers.
We can divide our array into blocks of size $$$b \ge 6$$$ (note that the value of $$$b$$$ depends on the solution). How can we use hint $$$2$$$? What should we store for each block?
Let's first look at the impact of earlier numbers on the final result. Earlier numbers can still influence $$$\lfloor f(n) \rfloor$$$ when $$$n$$$ is large. Suppose that all $$$f(i)$$$ are integers. Then decreasing any $$$a_i$$$ will decrease $$$\lfloor f(n) \rfloor$$$.
However, the impact of earlier numbers on $$$\lfloor f(n) \rfloor$$$ is still extremely small. A key observation is that when $$$n \ge 6$$$, changing $$$a_1$$$ will only affect $$$\lfloor f(n) \rfloor$$$ by at most $$$1$$$.
Another observation is that we can take the floor each time we square root. Specifically, we can define $$$f$$$ as:
- $$$f(1) = \lfloor \sqrt{a_1} \rfloor$$$
- For all $$$i > 1$$$, $$$f(i) = \lfloor \sqrt{f(i-1)+a_i} \rfloor$$$
This allows us to work with integers. From here on, we work with this new definition of $$$f$$$.
Let's divide our array into blocks of size $$$b=6$$$. We can append zeroes to the front of the array to make $$$n$$$ a multiple of $$$b$$$.
Consider some block representing range $$$[l,r]$$$. Let's consider the subarray $$$a_l,a_{l+1},\ldots ,a_r$$$. Let $$$v$$$ be the value of $$$f(r)$$$ if we let $$$f(l - 1)=0$$$. Using our first observation, we know that $$$f(r)$$$ will be either $$$v$$$ or $$$v+1$$$ depending on $$$f(l - 1)$$$. So let's find the smallest value $$$c$$$ such that if $$$f(l - 1)=c$$$, then $$$f(r)=v+1$$$. This can be done by iterating over the elements of the block backwards.
For each block, we store its corresponding ($$$v,c$$$). We can build a segment tree over the blocks for an $$$\mathcal{O}((n + q) \log n)$$$ solution.
Alternatively, we can do square root decomposition by having $$$b=\sqrt n$$$ which leads to an $$$\mathcal{O}(n + q \sqrt n)$$$ solution (in practice, we should set $$$b$$$ to something small like $$$100$$$).
#include <bits/stdc++.h>
using namespace std;
#define ll long long
#define pll pair<ll, ll>
const int B = 6, MAX = 2e5 + B + 5;
int n, q, offset, numBlocks; ll A[MAX]; pll seg[MAX];
pll comb(pll a, pll b){
auto [vl, cl] = a;
auto [vr, cr] = b;
// Not defined
if (vl == -1)
return b;
if (vr == -1)
return a;
// Doesn't cross cutoff
if (vl < cr - 1)
return {vr, (ll)2e18};
if (vl >= cr)
return {vr + 1, (ll)2e18};
// If vl == cr - 1
return {vr, cl};
}
void updSeg(int i, pll v){
seg[i += numBlocks] = v;
for (i /= 2; i > 0; i /= 2)
seg[i] = comb(seg[i * 2], seg[i * 2 + 1]);
}
ll qry(int l = 0, int r = numBlocks - 1){
pll retL = {-1, -1};
pll retR = {-1, -1};
for (l += numBlocks, r += numBlocks; l <= r; r /= 2, l /= 2){
if (l % 2 == 1)
retL = comb(retL, seg[l++]);
if (r % 2 == 0)
retR = comb(seg[r--], retR);
}
return comb(retL, retR).first;
}
void updBlock(int blk){
int l = blk * B;
int r = l + B - 1;
ll val = 0;
for (int i = l; i <= r; i++)
val = floor(sqrtl((long double)val + A[i]));
ll req = val + 1;
for (int i = r; i >= l; i--){
if (req > 2e9){
updSeg(blk, {val, (ll)2e18});
return;
}
req = req * req - A[i];
}
updSeg(blk, {val, req});
}
int main(){
ios_base::sync_with_stdio(0); cin.tie(0);
cin >> n >> q;
offset = (B - n % B) % B;
n += offset;
numBlocks = n / B;
for (int i = offset; i < n; i++)
cin >> A[i];
for (int b = 0; b < numBlocks; b++)
updBlock(b);
while (q--){
ll k, x;
cin >> k >> x;
k--;
k += offset;
A[k] = x;
updBlock(k / B);
cout << qry() << "\n";
}
}
#include <bits/stdc++.h>
using namespace std;
#define ll long long
const int B = 100, MAX = 2e5 + B + 5;
int n, q, offset, numBlocks; ll A[MAX], val[MAX], cut[MAX];
void buildBlock(int blk){
int l = blk * B;
int r = l + B - 1;
ll cur = 0;
for (int i = l; i <= r; i++)
cur = floor(sqrtl((long double)cur + A[i]));
val[blk] = cur;
ll req = cur + 1;
for (int i = r; i >= l; i--){
if (req > 2e9){
cut[blk] = 2e18;
return;
}
req = req * req - A[i];
}
cut[blk] = req;
}
ll qry(){
ll cur = 0;
for (int b = 0; b < numBlocks; b++)
cur = (cur >= cut[b]) ? val[b] + 1 : val[b];
return cur;
}
int main(){
ios_base::sync_with_stdio(0); cin.tie(0);
cin >> n >> q;
offset = (B - n % B) % B;
n += offset;
numBlocks = n / B;
for (int i = offset; i < n; i++)
cin >> A[i];
for (int b = 0; b < numBlocks; b++)
buildBlock(b);
while (q--){
ll k, x;
cin >> k >> x;
k--;
k += offset;
A[k] = x;
buildBlock(k / B);
cout << qry() << "\n";
}
}
1942G - Bessie and Cards
Problem Credits: smax
Analysis: smax
"Draw $$$1$$$" cards do not matter because we can immediately play them when we draw them.
Treat "draw $$$2$$$" as $$$+1$$$ and "draw $$$0$$$" as $$$-1$$$. We start with a balance of $$$+5$$$. We draw all the cards up to the first prefix where the balance dips to $$$0$$$. We are interested in the number of ways where the special cards lie in this prefix.
Read the hints.
Treat the special cards as "draw $$$0$$$" cards and multiply by the appropriate binomial coefficient at the end. Also treat cards of the same type as indistinguishable and multiply by the appropriate factorials at the end.
Enumerate the length of the prefix where the balance first hits $$$0$$$. This dictates how many $$$+1$$$ and $$$-1$$$ we have. Now we want to count the number of ways of arranging $$$+1$$$ and $$$-1$$$ such that the balance never dips to $$$0$$$, assuming we start with $$$+5$$$.
To solve this subproblem, we can draw inspiration from the path counting interpretation for counting balanced bracket sequences. We have $$$n$$$ open and $$$n$$$ close brackets and want to count the number of balanced bracket sequences starting with $$$\texttt{((((}$$$ (note that there are only $$$4$$$ instead of $$$5$$$ open brackets, to shift from all balances being strictly positive to all balances being non-negative). The number of sequences ignoring the balance constraint is $$$\binom{2n-4}{n}$$$. Any bracket sequence where some balance dips negative corresponds to a path that walks above the line $$$y = x$$$. For those paths, we reflect over the line $$$y = x + 1$$$ starting from the first point where the path steps over the line. This gives a bijection between these paths and paths that reach the point $$$(n-1,n+1)$$$, of which there are $$$\binom{2n-4}{n+1}$$$. So the total number of ways is $$$\binom{2n-4}{n} - \binom{2n-4}{n+1}$$$.
Our final answer is summing up the number of ways over all prefix lengths. Make sure to handle the case where you draw the entire deck correctly. The complexity is $$$\mathcal O(\min(a,c))$$$.
1942H - Farmer John's Favorite Intern
Problem Credits: oursaco
Analysis: oursaco / rainboy
Model this as a flow problem where growth events are an input of some flow to a node and harvest events and the necessary amount of fruits are an output of flow to the sink. We will define the $$$a_i$$$ edge to be the input edge of every node, $$$b_i$$$ to be the output edge for necessary fruits, and $$$c_i$$$ to be the output edge for growth events. The connections for each individual node will look like:
$$$(\text{source}, a_i, \text{# of growth events})$$$
$$$(a_i, \text{temporary node}, \text{INF})$$$
$$$(\text{temporary node}, b_i, \text{INF})$$$
$$$(\text{temporary node}, c_i, \text{INF})$$$
$$$(b_i, \text{sink}, \text{# of required peaches})$$$
$$$(c_i, \text{sink}, \text{# of harvest events})$$$
To simulate growth events affecting the subtree, every temporary node has infinite-capacity directed edges down to the temporary nodes of its children. To simulate growth events affecting the parent, every temporary node has an infinite-capacity directed edge to its parent's $$$b_i$$$. To simulate harvest events affecting the subtree, every node has infinite-capacity directed edges to all of its ancestors' $$$c_i$$$ nodes. Let's try to find the min cut of this graph. If we don't cut some node's $$$a_i$$$, then we must cut both $$$b_i$$$ and $$$c_i$$$ for all of its descendants. If we don't cut some node's $$$a_i$$$, we also must cut $$$c_i$$$ for all its ancestors and $$$b_i$$$ for its parent. Thus, we can define a $$$dp[i][0/1/2]$$$ tracking whether we are currently cutting the $$$a_i$$$ input edge, cutting the $$$b_i$$$ and $$$c_i$$$ output edges, or cutting the $$$a_i$$$ edge while some node in the subtree did not have its $$$a_i$$$ edge cut, in which case we need to cut the $$$c_i$$$ edge as well. Transitions look like:
To simulate growth events affecting the subtree, every temporary node has infinite capacity directed edges down to the temporary nodes of its children. To simulate growth events affecting the parent, every temporary node has an infinite capacity directed edge to its parent's $$$b_i$$$. To simulate harvest events affecting subtree, every node has infinite capacity directed edges to all its ancestors' $$$c_i$$$ nodes. Lets try to find the min cut of the graph. If we don't put cut some node's $$$a_i$$$, then we must cut both $$$b_i$$$ and $$$c_i$$$ for all of its descendants. If we don't cut some node's $$$a_i$$$, then we also must cut $$$c_i$$$ for all its ancestors and $$$b_i$$$ for its parent. Thus, we can model a $$$dp[i][0/1/2]$$$ to know if we are currently cutting the a_i input edge, cutting the b_i and c_i output edge, or cutting the a_i edge but there is some node in the subtree where we did not cut the a_i edge, in which case we need to cut the c_i edge as well. Transitions look like:
$$$dp[i][0] = a_i + \sum_j dp[j][0]$$$
$$$dp[i][1] = b_i + c_i + \sum_j dp[j][1]$$$
$$$dp[i][2] = a_i + c_i + \min(\sum_j \min(dp[j][0], dp[j][2]), b_i + \sum_j \min(dp[j][0], dp[j][1], dp[j][2]))$$$ where at least one child is a $$$dp[j][2]$$$ or $$$dp[j][1]$$$
$$$dp[i][2]$$$ can be rewritten as:
$$$dp[i][2] = a_i + c_i + \min(\min(dp[j][2] - \min(dp[j][0], dp[j][2])) + \sum_j \min(dp[j][0], dp[j][2]),$$$ $$$\min(dp[j][1] - \min(dp[j][0], dp[j][1], dp[j][2])) + b_i + \sum_j \min(dp[j][0], dp[j][1], dp[j][2])))$$$
Use dynamic dp by representing dp transitions as a matrix where all the light children's dp values are known and the heavy child's dp values are variable. Then, we can accelerate each chain in the heavy-light decomposition of the tree. To get the matrix representation of each transition to the heavy child, there are $$$5$$$ cases when considering a $$$dp[j][0/1/2] \rightarrow dp[i][2]$$$ transition: $$$dp[j][0]$$$ is the min, $$$dp[j][1]$$$ is the min, $$$dp[j][2]$$$ is the min, $$$dp[j][2] - \min(dp[j][0], dp[j][2])$$$ is the optimal added value, or $$$dp[j][1] - \min(dp[j][0], dp[j][1], dp[j][2])$$$ is the optimal added value. For each cell in the matrix, take the min over the $$$5$$$ cases.
Since each update only changes $$$\mathcal{O}(\log n)$$$ hld chains, the total complexity will be $$$\mathcal{O}(n \log^2 n \cdot 3^3)$$$. If we use balanced hld or other $$$\mathcal{O}(n \log n)$$$ hld techniques, the complexity becomes $$$\mathcal{O}(n \log n \cdot 3^3)$$$. Both solutions should pass under the given constraints.
#ifndef LOCAL
#pragma GCC optimize("O3,unroll-loops")
#pragma GCC target("avx,avx2,fma")
#pragma GCC target("sse4,popcnt,abm,mmx,tune=native")
#endif
#include <bits/stdc++.h>
#include <ext/pb_ds/assoc_container.hpp>
#include <ext/pb_ds/tree_policy.hpp>
using namespace __gnu_pbds;
using namespace std;
#define pb push_back
#define ff first
#define ss second
typedef long long ll;
typedef long double ld;
typedef pair<int, int> pii;
typedef pair<ll, ll> pll;
typedef pair<ld, ld> pld;
const int INF = 1e9;
const ll LLINF = 1e18;
const int MOD = 1e9 + 7;
template<class K> using sset = tree<K, null_type, less<K>, rb_tree_tag, tree_order_statistics_node_update>;
inline ll ceil0(ll a, ll b) {
return a / b + ((a ^ b) > 0 && a % b);
}
void setIO() {
ios_base::sync_with_stdio(0); cin.tie(0);
}
struct Matrix {
array<array<ll, 3>, 3> mat;
array<ll, 3> &operator[](int x){
return mat[x];
}
const array<ll, 3> &operator[](int x) const {
return mat[x];
}
Matrix(){
for(int i = 0; i < 3; i++) for(int j = 0; j < 3; j++) mat[i][j] = 0;
}
Matrix operator*(const Matrix &x){
Matrix ret;
for(int i = 0; i < 3; i++) for(int j = 0; j < 3; j++) ret[i][j] = LLINF;
for(int i = 0; i < 3; i++){
for(int k = 0; k < 3; k++){
for(int j = 0; j < 3; j++){
ret[i][j] = min(ret[i][j], mat[i][k] + x[k][j]);
}
}
}
return ret;
}
Matrix &operator*=(const Matrix &x){
(*this) = (*this) * x;
return *this;
}
};
struct fastset {
priority_queue<ll, vector<ll>, greater<ll>> q, rem;
void insert(ll x){
q.push(x);
}
void erase(ll x){
rem.push(x);
}
ll query(){
while(rem.size() && q.top() == rem.top()){
rem.pop();
q.pop();
}
return q.top();
}
void clear(){
while(q.size()) q.pop();
while(rem.size()) rem.pop();
}
};
vector<int> g[500005];
int sub[500005];
int head[500005];
int par[500005];
int tim = 0;
vector<int> chain[500005];
int weight[500005];
void dfs1(int x){
sub[x] = 1;
for(int &i : g[x]){
dfs1(i);
sub[x] += sub[i];
if(sub[i] > sub[g[x][0]]) swap(i, g[x][0]);
}
}
int in[500005], out[500005];
int dir[500005];
void dfs2(int x, int p){
chain[head[x]].pb(x);
par[x] = p;
weight[x] = 1;
in[x] = tim++;
for(int i : g[x]){
head[i] = (i == g[x][0] ? head[x] : i);
dfs2(i, x);
if(i != g[x][0]) weight[x] += sub[i];
}
out[x] = tim - 1;
}
Matrix dp[500005];
Matrix cum[500005];
int rt[500005];
int left0[500005], right0[500005], ppar[500005];
int dnq(int l, int r, int p, const vector<int> &v){
if(l > r) return 0;
int sum = 0;
for(int i = l; i <= r; i++) sum += weight[v[i]];
int cur = 0;
int mid = -1;
for(int i = l; i <= r; i++){
cur += weight[v[i]];
if(cur >= sum/2){
mid = i;
break;
}
}
if(p == -1) ppar[v[mid]] = v[mid];
else ppar[v[mid]] = p;
left0[v[mid]] = dnq(l, mid - 1, v[mid], v);
right0[v[mid]] = dnq(mid + 1, r, v[mid], v);
return v[mid];
}
void merge(int x){
if(right0[x] && left0[x]){
cum[x] = cum[left0[x]]*dp[x]*cum[right0[x]];
} else if(left0[x] && !right0[x]){
cum[x] = cum[left0[x]]*dp[x];
} else if(right0[x] && !left0[x]){
cum[x] = dp[x]*cum[right0[x]];
} else {
cum[x] = dp[x];
}
}
void pull(int x){
while(x != rt[x]){
merge(x);
x = ppar[x];
}
merge(x);
}
ll a[500005], b[500005], c[500005];
ll dp0[500005][3];
void dfs3(int x){
dp0[x][0] = a[x];
dp0[x][2] = b[x] + c[x];
ll deg1 = a[x] + c[x], deg2 = b[x] + a[x] + c[x];
ll mn1 = LLINF, mn2 = LLINF;
for(int i : g[x]){
dfs3(i);
dp0[x][0] += dp0[i][0];
dp0[x][2] += dp0[i][2];
deg1 += min(dp0[i][0], dp0[i][1]);
deg2 += min({dp0[i][0], dp0[i][1], dp0[i][2]});
mn1 = min(mn1, max((ll)0, dp0[i][1] - dp0[i][0]));
mn2 = min(mn2, max((ll)0, min(dp0[i][1], dp0[i][2]) - dp0[i][0]));
}
dp0[x][1] = min(deg1 + mn1, deg2 + mn2);
}
fastset deg1[500005], deg2[500005];
ll sum0[500005], sum2[500005], sum01[500005], sum012[500005];
void rebuild(int x){
dp[x][0][0] = a[x] + sum0[x];
dp[x][0][1] = LLINF;
dp[x][0][2] = LLINF;
if(g[x].size()){
dp[x][1][0] = min(a[x] + b[x] + c[x] + deg2[x].query() + sum012[x], a[x] + c[x] + deg1[x].query() + sum01[x]);
dp[x][1][1] = min(a[x] + c[x] + sum01[x], a[x] + b[x] + c[x] + sum012[x]);
dp[x][1][2] = a[x] + b[x] + c[x] + sum012[x];
} else {
dp[x][1][0] = dp[x][1][1] = dp[x][1][2] = LLINF;
}
dp[x][2][0] = LLINF;
dp[x][2][1] = LLINF;
dp[x][2][2] = b[x] + c[x] + sum2[x];
}
ll eval(int x, int ind){
return min({cum[rt[x]][ind][0], cum[rt[x]][ind][1], cum[rt[x]][ind][2]});
}
void rem(int x){
while(head[x] != 1){
if(x == head[x]){
ll v0 = eval(x, 0), v1 = eval(x, 1), v2 = eval(x, 2);
deg1[par[x]].erase(max((ll)0, v1 - v0));
deg2[par[x]].erase(max((ll)0, min(v1, v2) - v0));
sum012[par[x]] -= min({v0, v1, v2});
sum01[par[x]] -= min(v0, v1);
sum0[par[x]] -= v0;
sum2[par[x]] -= v2;
x = par[x];
}
x = head[x];
}
}
void add(int x){
while(head[x] != 1){
if(x == head[x]){
ll v0 = eval(x, 0), v1 = eval(x, 1), v2 = eval(x, 2);
deg1[par[x]].insert(max((ll)0, v1 - v0));
deg2[par[x]].insert(max((ll)0, min(v1, v2) - v0));
sum012[par[x]] += min({v0, v1, v2});
sum01[par[x]] += min(v0, v1);
sum0[par[x]] += v0;
sum2[par[x]] += v2;
rebuild(par[x]);
pull(par[x]);
x = par[x];
}
x = head[x];
}
}
int main(){
setIO();
int t;
cin >> t;
while(t--){
int n, q;
cin >> n >> q;
for(int i = 2; i <= n; i++){
int x;
cin >> x;
g[x].pb(i);
}
ll sum = 0;
for(int i = 1; i <= n; i++) cin >> b[i], sum += b[i];
dfs1(1);
head[1] = 1;
dfs2(1, 1);
dfs3(1);
for(int i = 1; i <= n; i++){
if(chain[i].size()){
int p = dnq(0, chain[i].size() - 1, -1, chain[i]);
for(int j : chain[i]) rt[j] = p;
}
}
for(int i = 1; i <= n; i++){
deg1[i].insert(LLINF);
deg2[i].insert(LLINF);
for(int j : g[i]){
if(head[j] == head[i]) continue;
deg1[i].insert(max((ll)0, dp0[j][1] - dp0[j][0]));
deg2[i].insert(max((ll)0, min(dp0[j][1], dp0[j][2]) - dp0[j][0]));
sum012[i] += min({dp0[j][0], dp0[j][1], dp0[j][2]});
sum01[i] += min(dp0[j][0], dp0[j][1]);
sum0[i] += dp0[j][0];
sum2[i] += dp0[j][2];
}
rebuild(i);
pull(i);
}
while(q--){
int t, x, v;
cin >> t >> x >> v;
rem(x);
if(t == 1) a[x] += v;
else c[x] += v, sum += v;
rebuild(x);
pull(x);
add(x);
cout << (min({eval(1, 0), eval(1, 1), eval(1, 2)}) == sum ? "YES" : "NO") << "\n";
}
tim = 0;
for(int i = 1; i <= n; i++){
a[i] = b[i] = c[i] = 0;
g[i].clear();
chain[i].clear();
deg1[i].clear();
deg2[i].clear();
sum0[i] = sum2[i] = sum01[i] = sum012[i] = 0;
}
}
}
There also exists a solution without flows. Thanks to rainboy for discovering it during testing!
Let's greedily assign the growth events. We will always try to use the growth events to satisfy harvest events or required peaches in their subtree, and give any excess to the parent's required peaches. Similarly, we will always use harvest events on growth events in their subtree. Define $$$a_i$$$ as the number of growth events on node $$$i$$$ and $$$c_i$$$ as the number of harvest events on node $$$i$$$. Let's define $$$dp[u]$$$ for each node $$$u$$$: if $$$dp[u] > 0$$$, it means we can give $$$dp[u]$$$ growths to $$$u$$$'s parent; otherwise, it means $$$u$$$ requires $$$-dp[u]$$$ growths from its ancestors. Let $$$pos_u$$$ denote the sum of positive $$$dp[v]$$$ over children $$$v$$$ of $$$u$$$, and $$$neg_u$$$ the sum of negative $$$dp[v]$$$. First we use the excess growth to satisfy the required peaches, so we can set $$$b_u = \max(0, b_u - pos_u)$$$. Then we distribute the growth events at node $$$u$$$ to satisfy the requirements in the subtree. To account for the harvest events at node $$$u$$$, define $$$bal_u = \text{sum of } a_i \text{ in u's subtree} - \text{sum of } b_i \text{ in u's subtree} - \text{sum of } c_i \text{ in u's subtree}$$$. We know that $$$dp[u] \le bal_u$$$, and as it turns out, as long as we keep $$$bal_u$$$ as an upper bound on $$$dp[u]$$$, the harvest events will always be satisfied. Why? $$$a_u + neg_u - \max(0, b_u - pos_u)$$$ is the assignment of growth events without considering harvest events. The number of unused growth events in the subtree of $$$u$$$ is $$$bal_u - (a_u + neg_u - \max(0, b_u - pos_u)) + c_u$$$. If $$$a_u + neg_u - \max(0, b_u - pos_u) \le bal_u$$$, we can use all of our harvest events on unused growth events in the subtree. If $$$a_u + neg_u - \max(0, b_u - pos_u) > bal_u$$$, we still use all of our harvest events, which is exactly why $$$dp[u]$$$ must be capped at $$$bal_u$$$.
Thus, we can formulate our dp as $$$dp[u] = \min(a_u + neg_u - \max(0, b_u - pos_u), bal_u)$$$. Let $$$v$$$ be the heavy child of $$$u$$$, let $$$pos'_u$$$ be the sum of positive dp over children excluding $$$v$$$, and let $$$neg'_u$$$ be the sum of negative dp over children excluding $$$v$$$. Then, we can rewrite our dp as $$$dp[u] = \min(dp[v] + a_u + neg'_u - \max(0, b_u - pos'_u), a_u + neg'_u, bal_u)$$$. Define $$$x_u = a_u + neg'_u - \max(0, b_u - pos'_u)$$$ and $$$y_u = a_u + neg'_u$$$. Then $$$dp[u] = \min(dp[v] + x_u, y_u, bal_u)$$$. This transition can be written as a matrix and sped up using dynamic dp. However, we can also notice that the dp can be represented as the smallest $$$x_1 + x_2 + \ldots + x_{i - 1} + \min(y_i, bal_i)$$$ over all prefixes of each heavy-light decomposition chain. Thus, we can maintain the dp using a segment tree supporting range add and prefix min.
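As a sanity check, the static version of this dp (recomputed from scratch over the whole tree, with no HLD or segment tree) can be sketched as follows; the function name and signature are illustrative, not taken from the model code:

```cpp
#include <algorithm>
#include <cassert>
#include <vector>
using ll = long long;

// Static version of the greedy dp: a[u] growth events, c[u] harvest events,
// b[u] required peaches. Returns dp[u] for the subtree of u; the answer for
// the whole tree is "dp[root] >= 0" (all requirements can be met).
ll solveDp(int u, const std::vector<std::vector<int>>& g,
           const std::vector<ll>& a, const std::vector<ll>& b,
           const std::vector<ll>& c, std::vector<ll>& bal) {
    ll pos = 0, neg = 0;
    bal[u] = a[u] - b[u] - c[u];  // subtree balance, accumulated below
    for (int v : g[u]) {
        ll d = solveDp(v, g, a, b, c, bal);
        bal[u] += bal[v];
        if (d > 0) pos += d; else neg += d;
    }
    // dp[u] = min(a_u + neg_u - max(0, b_u - pos_u), bal_u)
    return std::min(a[u] + neg - std::max((ll)0, b[u] - pos), bal[u]);
}
```

The solutions above maintain exactly this quantity under point updates, using heavy-light decomposition and a segment tree.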
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#define N 300000
#define N_ (1 << 19) /* N_ = pow2(ceil(log2(N))) */
#define INF 0x3f3f3f3f3f3f3f3fLL
long long min(long long a, long long b) { return a < b ? a : b; }
long long max(long long a, long long b) { return a > b ? a : b; }
int *ej[N], eo[N], n;
void append(int i, int j) {
int o = eo[i]++;
if (o >= 2 && (o & o - 1) == 0)
ej[i] = (int *) realloc(ej[i], o * 2 * sizeof *ej[i]);
ej[i][o] = j;
}
int dd[N], pp[N], qq[N], jj[N], ta[N], tb[N], qu[N];
int bb[N]; long long aa[N], zz[N], zzp[N], zzn[N];
int dfs1(int i, int d) {
int o, s, k_, j_;
dd[i] = d;
s = 1, k_ = 0, j_ = -1;
zz[i] = -bb[i];
for (o = eo[i]; o--; ) {
int j = ej[i][o], k = dfs1(j, d + 1);
s += k;
if (k_ < k)
k_ = k, j_ = j;
zz[i] += zz[j];
}
qq[i] = j_, jj[i] = j_ == -1 ? i : jj[j_];
return s;
}
int t_;
void dfs2(int i, int q) {
int o, j_;
qu[ta[i] = t_++] = i;
j_ = qq[i], qq[i] = q;
if (j_ != -1)
dfs2(j_, q);
for (o = eo[i]; o--; ) {
int j = ej[i][o];
if (j != j_)
dfs2(j, j);
}
tb[i] = t_;
}
long long stx[N_ * 2], sty[N_ * 2], stz[N_ * 2], lz[N_]; int h_, n_;
void put(int i, long long x) {
stz[i] += x;
if (i < n_)
lz[i] += x;
}
void pus(int i) {
if (lz[i])
put(i << 1 | 0, lz[i]), put(i << 1 | 1, lz[i]), lz[i] = 0;
}
void pul(int i) {
if (!lz[i]) {
int l = i << 1, r = l | 1;
stx[i] = stx[l] + stx[r];
sty[i] = min(sty[l], stx[l] + sty[r]);
stz[i] = min(stz[l], stx[l] + stz[r]);
}
}
void push(int i) {
int h;
for (h = h_; h > 0; h--)
pus(i >> h);
}
void pull(int i) {
while (i > 1)
pul(i >>= 1);
}
void build() {
int i, j, a, o;
h_ = 0;
while (1 << h_ < n)
h_++;
n_ = 1 << h_;
memset(stx, 0, n_ * 2 * sizeof *stx);
memset(sty, 0, n_ * 2 * sizeof *sty);
memset(stz, 0, n_ * 2 * sizeof *stz);
memset(lz, 0, n_ * sizeof *lz);
memset(aa, 0, n * sizeof *aa);
memset(zzp, 0, n * sizeof *zzp);
memset(zzn, 0, n * sizeof *zzn);
for (i = 0; i < n; i++) {
a = ta[i];
stz[n_ + a] = zz[i];
for (o = eo[i]; o--; ) {
j = ej[i][o];
if (qq[j] == j)
zzn[i] += -zz[j];
}
stx[n_ + a] = -zzn[i] - bb[i];
sty[n_ + a] = -zzn[i];
}
for (i = n_ - 1; i > 0; i--)
pul(i);
}
void upd(int i) {
int i_ = n_ + ta[i];
push(i_);
stx[i_] = aa[i] - zzn[i] - max(bb[i] - zzp[i], 0);
sty[i_] = aa[i] - zzn[i];
pull(i_);
}
void update_z(int l, int r, long long x) {
int l_ = l += n_, r_ = r += n_;
push(l_), push(r_);
for ( ; l <= r; l >>= 1, r >>= 1) {
if ((l & 1) == 1)
put(l++, x);
if ((r & 1) == 0)
put(r--, x);
}
pull(l_), pull(r_);
}
long long query(int l, int r) {
long long xl, yl, zl, xr, yr, zr;
push(l += n_), push(r += n_);
xl = 0, yl = INF, zl = INF, xr = 0, yr = INF, zr = INF;
for ( ; l <= r; l >>= 1, r >>= 1) {
if ((l & 1) == 1) {
yl = min(yl, xl + sty[l]), zl = min(zl, xl + stz[l]), xl += stx[l];
l++;
}
if ((r & 1) == 0) {
yr = min(sty[r], yr == INF ? INF : stx[r] + yr), zr = min(stz[r], zr == INF ? INF : stx[r] + zr), xr += stx[r];
r--;
}
}
return min(min(xl + xr, min(yl, yr == INF ? INF : xl + yr)), min(zl, zr == INF ? INF : xl + zr));
}
int main() {
int t;
scanf("%d", &t);
while (t--) {
int q_, i, p, q;
scanf("%d%d", &n, &q_);
for (i = 0; i < n; i++)
ej[i] = (int *) malloc(2 * sizeof *ej[i]), eo[i] = 0;
pp[0] = -1;
for (i = 1; i < n; i++) {
scanf("%d", &pp[i]), pp[i]--;
append(pp[i], i);
}
for (i = 0; i < n; i++)
scanf("%d", &bb[i]);
dfs1(0, 0);
t_ = 0, dfs2(0, 0);
build();
while (q_--) {
int t, x;
scanf("%d%d%d", &t, &i, &x), i--;
if (t == 2)
x = -x;
if (t == 1)
aa[i] += x, upd(i);
while (i >= 0) {
q = qq[i], p = pp[q];
if (p >= 0) {
if (zz[q] > 0)
zzp[p] -= zz[q];
else
zzn[p] -= -zz[q];
}
update_z(ta[q], ta[i], x);
zz[q] = query(ta[q], ta[jj[q]]);
if (p >= 0) {
if (zz[q] > 0)
zzp[p] += zz[q];
else
zzn[p] += -zz[q];
upd(p);
}
i = p;
}
printf(zz[0] >= 0 ? "YES\n" : "NO\n");
}
for (i = 0; i < n; i++)
free(ej[i]);
}
return 0;
}
Thanks for the lightning-fast editorial!
ah you beat me
It's just luck, sir.
I don't understand. How do all of your comments get downvoted? They seem like pretty normal comments to me.
Because the original comment is unnecessary + kind of a cheeky way to exploit a human bias and get some upvotes.
The downvotes on the reply are mostly a continuation of "punishing" the same guy. Maybe punish is too strong a word; rather, discouraging by means of downvoting is the correct expression.
Bro you're everywhere lightning fast
second :(
C was pretty tough imo
How can you prove the $$$n \ge 6$$$ observation in F?
Since $$$a_i$$$ are bounded, I just brute-forced inserting a bunch of $$$10^{18}$$$'s to see how far the change propagates.
Maybe it is the fact that the 64th root of $$$10^{18}$$$ is less than 2.
Because $$$\log_{2}\log_{2}n<6$$$.
I'm very disappointed with the editorial writers for not including this proof.
The most straightforward proof is obtained by considering the image of function $$$f(i, x)$$$ — the answer after considering $$$i$$$ elements, if $$$a_1 = x$$$.
$$$|f(i, [L; L + len])| \le \sqrt{len}$$$.
This can be shown using the inequality $$$\sqrt{a+x}-\sqrt{x} \le \sqrt{a}-\sqrt{0}$$$, which holds because $$$\sqrt{\cdot}$$$ is concave; it's a very intuitive claim that you can guess by looking at the plot of $$$\sqrt{x}$$$.
With this we can show that $$$|f(i, [0; 10^{18}])| \le 10^{18/2^i}$$$.
However, this isn't a complete proof, as the RHS approaches 1 but never actually reaches it, so it could be that $$$f(0) = x-\varepsilon$$$, $$$f(1) = x$$$, $$$f(10^{18}) = x+1$$$.
Let's say we already know that f(i, x) takes only 3 values, the thing is either $$$\left \lfloor{\sqrt{x}}\right \rfloor = \left \lfloor{\sqrt{x+1}}\right \rfloor$$$ or $$$\left \lfloor{\sqrt{x+1}}\right \rfloor = \left \lfloor{\sqrt{x+2}}\right \rfloor $$$, so $$$f(i+1, x)$$$ will only take 2 values.
or, more simply, we have that
$$$a+b\le a+b+2\sqrt{ab}=(\sqrt a+\sqrt b)^2$$$
$$$\implies\sqrt{a+b}\le\sqrt a+\sqrt b$$$
$$$\implies \sqrt{a+b}-\sqrt b\le\sqrt a.$$$
Then $$$\sqrt{c+\sqrt{b+a}}-\sqrt{c+\sqrt b}\le\sqrt{\sqrt{b+a}-\sqrt b}\le\sqrt{\sqrt a}=\sqrt[4] a.$$$
This extends similarly for more nested radicals. Thus, increasing $$$a_i$$$ by $$$x<10^{18}<2^{64}$$$ will only affect $$$f(i+6)$$$ by at most $$$\sqrt[2^6] x =\sqrt[64] x<2,$$$ the desired result.
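The bound can also be checked numerically. Below is a small sketch (the `isqrt` and `nested` helpers are hypothetical, not from the editorial) that propagates a $$$10^{18}$$$ perturbation of the innermost term through the floored recurrence $$$f_i = \lfloor\sqrt{a_i + f_{i-1}}\rfloor$$$ and observes that it is essentially gone after a handful of levels:

```cpp
#include <cassert>
#include <cmath>
using ll = long long;

// Exact floor of sqrt for non-negative 64-bit values well below LLONG_MAX:
// take sqrtl as an estimate, then adjust for floating-point rounding.
ll isqrt(ll x) {
    ll r = (ll)sqrtl((long double)x);
    while (r > 0 && r * r > x) r--;
    while ((r + 1) * (r + 1) <= x) r++;
    return r;
}

// Floored nested radical over a[0..n-1], innermost element first:
// f = floor(sqrt(a[i] + f_prev)), starting from f = 0.
ll nested(const ll* a, int n) {
    ll f = 0;
    for (int i = 0; i < n; i++) f = isqrt(a[i] + f);
    return f;
}
```

Starting the chain with $$$10^{18}$$$ versus $$$0$$$ changes the value after 7 levels by at most 1, matching the claim that a perturbation below $$$2^{64}$$$ affects $$$f(i+6)$$$ by less than 2.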
I created video editorial for D: Learning to Paint
Practice Problem for building intuition for problems like today's D: Learning to Paint
I have added hints and thought process for D on CF Step.
Practice Problem for learning the technique of merging k-sorted list.
Man problem C was mega sexy... during contest it just gave my skills a serious reality check
for real
It humiliated me
I wonder why $$$y$$$ is not used in problem C.
B can also be solved backwards using the observation that the prefix mex is equal to the suffix min. This idea also appeared in the Cyclic Mex problem recently.
It took me 90mins to solve B :(
Can you explain this idea more?
the prefix mex is equal to the suffix min
Say you have a permutation of 0 to n-1. Then for any prefix of this sequence, the mex will be the minimum element in the remainder of the sequence (the suffix).
Could you please explain the line mex = min(mex, p[i]) in your code? Why do we take the minimum between mex and p[i] as the new mex value?
A permutation contains all values from $$$0$$$ to $$$n - 1$$$. The definition of mex is the first missing non-negative integer. If you consider the $$$i$$$-th prefix, which elements are missing from it? Exactly the elements of the suffix $$$p[i + 1], \dots, p[n - 1]$$$. By the definition of mex, the minimum missing element is the mex.
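This equivalence is easy to brute-force check on small permutations; a hypothetical checker:

```cpp
#include <algorithm>
#include <cassert>
#include <vector>

// For a permutation p of 0..n-1, checks that mex(p[0..i]) equals
// min(p[i+1..n-1]) for every i (with the min of an empty suffix taken as n).
bool prefixMexEqualsSuffixMin(const std::vector<int>& p) {
    int n = (int)p.size();
    for (int i = 0; i < n; i++) {
        // mex of the prefix p[0..i], computed naively
        std::vector<bool> has(n + 1, false);
        for (int j = 0; j <= i; j++) has[p[j]] = true;
        int mex = 0;
        while (mex <= n && has[mex]) mex++;
        // min of the suffix p[i+1..n-1], or n if the suffix is empty
        int mn = n;
        for (int j = i + 1; j < n; j++) mn = std::min(mn, p[j]);
        if (mex != mn) return false;
    }
    return true;
}
```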
Thanks. Understood.
I just want to know the concept but still can't get the logic of how this works. Can you explain this concept a little bit, please?
Can someone please explain the problem statement of D ?
You can choose some subset of integers on $$$ [1, n] $$$. The beauty of a subset is equal to the sum of scores of each maximal contiguous subarray of selected elements (e.g. you cannot extend the subarray more to the left or right). A score of a subarray $$$ [l, r] $$$ is given by $$$ a_{l, r} $$$. Find the $$$ k $$$ greatest beauties possible from the $$$ 2^n $$$ possible subsets (including duplicates).
I don't understand the statement of D either......
It's like they don't want you to comprehend it :<
they could've made it more clear in the Input or testcases description:(
I take back my words. I found it a lot more comprehensible after reading others comments.
Now it's simply that it is difficult:)
So many formulas in G :(((. Had the idea but probably bugged something there. Anyway, great problemset. It's funny that the "game" that problem E reduces to (by a series of really nice observations, I'd say) (the game being: subtract 1 from some number of piles) and the "parenthesis problem" that G reduces to both appear on CSES, in the problems Another Game and Bracket Sequences 2.
By looking at $$$f$$$ as a big integer in a weird changing base, F can be quite easily solved with a modified version of a Trygub Number (Big integer with negative digits)
254189455
Is this the same Trygub as antontrygub ?
Yes.
https://codeforces.me/blog/entry/115626?#comment-1026456
Quoting problem E's editorial:
Pardon me for being in the dark but why is this true? I thought it should be $$$l - s - 2n + 1$$$, since for each known set $$$(g_1,g_2,...,g_n)$$$, isn't it that the area of cows and inner gaps forms a solid block of $$$2n + s$$$ cells, and we would just need to find the starting place to put that block in?
We are placing the location of each g, so each g counts for one space while choosing locations, which contributes a total of n, so it will end up being l-s-n at the top
I am not sure where your +1 comes from
Wait a second, I got why I was wrong. My claim up above would only secure that $$$b_i - a_i = g_i + 1$$$, yet wrongly assume $$$a_{i+1} - b_i = 1$$$, which is not always the case.
Thanks for your help!
I could not solve beyond problem A. :(
Yeah bro, I could not even solve problem A. Forgot there was a contest :(
Happens, :)
Div.1+2 can be somewhat overwhelming to beginners, try some Div.3 problems, or AtCoder Beginner Contests.
Also, given the length/duration of the contest, if you didn't solve A, do reassess your approach. Did you play around with some small testcases to see patterns? Looking at the editorial, what could you have done to notice things sooner, etc.
I was able to solve problem A. But could not solve Problem B.
I tried to make some observations, but the concept of MEX was somewhat new to me. I only know what it does; I haven't fully studied its other details and implementations.
I do plan to upsolve and go through the editorial tomorrow.
Thank you.
Can you explain the time complexity of B if there is a while loop for increasing mex?
Consider the outer for loop.
For any given $$$i$$$, there would be at most 1 while-check for has[mex] that yields false (stopping the while loop).
Also, consider the total number of times the while-check for has[mex] can be true over all $$$i$$$: it is bounded by $$$n$$$, since every time the check is true, mex increases, and mex can only go up to $$$n$$$.
Thus, overall, the contribution of the considered while loop is $$$O(n)$$$.
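In code, the amortized argument above looks like this (a sketch with hypothetical names, not the editorial's exact code):

```cpp
#include <cassert>
#include <vector>

// Prefix mex of a permutation in O(n) total: `mex` never resets, so the
// inner while loop advances at most n times over the entire run.
std::vector<int> prefixMex(const std::vector<int>& p) {
    int n = (int)p.size();
    std::vector<bool> has(n + 1, false);
    std::vector<int> mexes(n);
    int mex = 0;
    for (int i = 0; i < n; i++) {
        has[p[i]] = true;
        while (mex <= n && has[mex]) mex++;  // amortized O(n) over all i
        mexes[i] = mex;
    }
    return mexes;
}
```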
Also note that the while loop doesn't start from the beginning each time; it continues from where it stopped before, so the whole complexity is $$$N+N$$$.
You can also create a set containing the numbers 0 through $$$n$$$, and after using a number, delete it from the set. The set's first element will then be the current MEX. The total complexity of these operations is $$$O(n \log n)$$$, because set operations are $$$O(\log n)$$$.
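A sketch of this set-based variant (the helper name is hypothetical, not the editorial code):

```cpp
#include <cassert>
#include <set>
#include <vector>

// Maintain the set of values not yet seen; its minimum is the current mex.
// O(n log n) total, since each value is inserted and erased once.
std::vector<int> prefixMexWithSet(const std::vector<int>& p) {
    int n = (int)p.size();
    std::set<int> unused;
    for (int v = 0; v <= n; v++) unused.insert(v);
    std::vector<int> mexes(n);
    for (int i = 0; i < n; i++) {
        unused.erase(p[i]);
        mexes[i] = *unused.begin();  // smallest unseen value = mex
    }
    return mexes;
}
```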
For problem C (hard):
I think it should be "whether $$$g$$$ is odd or even, we can choose $$$\lfloor\frac{g}{2}\rfloor$$$ vertices to make $$$g$$$ extra triangles."
For example, for a pentagon, if we have chosen $$$a=1$$$ and $$$b=5$$$, in which case $$$g=3$$$, then we can choose $$$\lfloor\frac{3}{2}\rfloor=1$$$ vertex ($$$3$$$) to make $$$3$$$ extra triangles ($$$123$$$, $$$135$$$, $$$134$$$). For a hexagon, if we have chosen $$$a=1$$$ and $$$b=6$$$, then we can choose $$$2$$$ vertices($$$3$$$ and $$$4$$$) to make $$$4$$$ extra triangles ($$$123$$$, $$$134$$$, $$$146$$$, $$$145$$$).
I wasn't counting the first case, solely the second case where the extra triangle shares one side with the $$$x$$$-polygon and two with the outer.
Oh, I have just understood that. Thanks.
My solution on problem D is kinda different and I have no idea why it works
Basically you do as editorial says: dp[i] stores the best k values among the prefix [1...i]
The brute-force approach: iterate through the previous lists and see if you can get a better list by binary search. You replace the last p values with the first p values and then sort the list.
The optimized version: instead of doing the update for every index, you induce an order in which to do it: sort the lists by the best value of list j plus the current cost induced by the i-th element. Then you proceed as in the brute-force approach.
I don't know if the tests are weak. It is 2-3 times slower then editorial approach.
254190056
It seems that I used a similar solution to yours, the only difference is that you used binary_search but I used priority_queue. I am also seeking hack data for my solution. 254165883
Hacked with the same test case as in this comment.
Can anyone explain the time complexity for the B solution code? How does iterating over the HAS array impact the overall time complexity?
It's O(n), as we are traversing each element of P and the has array only once.
Can you please explain ?
Since the variable mex only increases, you go through each index of the has array only once. Sorry, I can't explain it better than this; maybe read about or watch a tutorial on time and space complexity.
A reminder to focus more on geometry .
Thanks for an amazing round and a wonderful tutorial. Can someone help me with my code for problem D? I believe I have the same idea with a very similar complexity, $$$O(nk \log k)$$$. However, it got TLE and I can't figure out why... Here's my submission: 254196468
Your complexity is $$$O(N \cdot k^2 \cdot log(k))$$$.
For any index $$$i$$$, you are iterating over all $$$j < i$$$, which contributes $$$O(n)$$$.
For each index $$$j$$$, you are iterating over its complete multiset, which contributes $$$O(k \log k)$$$ for each such $$$j$$$.
Oh, thanks, do you have any idea how to speed it up?
Yes, it's a famous interview problem. Given $$$k$$$ sorted lists, how do you merge them?
The trick is to insert the indices of all the lists in a max heap, with a custom comparator that assigns highest priority to the index with the highest $$$a[ind]$$$.
Then, you pop $$$k$$$ times from this modified heap. The time complexity is now $$$O(k*log(k*n))$$$
Standard Problem Link
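A minimal sketch of this k-way merge (names are illustrative, not from any linked solution):

```cpp
#include <cassert>
#include <queue>
#include <tuple>
#include <vector>

// Take the k largest values from several descending-sorted lists without
// scanning them fully: keep one cursor per list in a max-heap.
std::vector<long long> topK(const std::vector<std::vector<long long>>& lists, int k) {
    // heap entries: (value, list index, position in that list)
    std::priority_queue<std::tuple<long long, int, int>> pq;
    for (int i = 0; i < (int)lists.size(); i++)
        if (!lists[i].empty()) pq.push({lists[i][0], i, 0});
    std::vector<long long> res;
    while ((int)res.size() < k && !pq.empty()) {
        auto [v, i, j] = pq.top();
        pq.pop();
        res.push_back(v);
        // advance the cursor of the list we just popped from
        if (j + 1 < (int)lists[i].size()) pq.push({lists[i][j + 1], i, j + 1});
    }
    return res;
}
```

Each pop does one push at most, so only $$$O(k + n)$$$ heap operations happen in total.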
Thank you, thank you, I'm both glad for almost having the solution and disappointed for missing it for a standard problem. Thanks again
Almost the same code, What a miss! 254211857
Just to clarify the process my solution goes with:
1- I assumed there's a source vertex 0 and a sink vertex 2*n+4
2- I assumed each node has two copies one that can start a segment and one that can end a segment, and the edges from nodeStart to nodeEnd have costs from the 2d array A.
3- Now we DFS from the vertex and calculate the DP of the highest K paths to the sink.
I probably found the most interesting solution for question A, even though idk why I did it -_- ;) 254136686
#include <iostream>
using namespace std;
void solve() {
    int n, k;
    cin >> n >> k;
    int arr[n + 1];
    int res = 1;
    if (k == 1 || k == n) {
        for (int i = 1; i <= n; i++) {
            arr[i] = res;
            if (i % k == 0) res++;
        }
        for (int i = 1; i <= n; i++) cout << arr[i] << " ";
        cout << endl;
    } else {
        cout << -1 << endl;
    }
}
int main() {
    int t;
    cin >> t;
    while (t--) solve();
    return 0;
}
lol I misread and tried to solve G for cards of types 1, 2, 3 instead of 0, 1, 2
lol, then the probability of winning is 1, b/c everytime you play a card you draw some of them instead of that one.
now I realize I even missed the first free 5 cards xD my game starts with just 1 card and she'll lose in cases like "1 1 1 special" or "2 special special" xD
Did 3 after a long time.. :)
how about div 3 very soon ?
Loved problem C1 & C2.... thanks for such a beautiful question :)
Hello everyone, I used a weird algorithm to solve Problem D. 254165883
I believe that my solution is wrong, since it should be $$$O(n^2 \cdot k \cdot \log k)$$$, but I used a trick to speed it up. I guess there is a certain hack input, but it is really hard to construct. I failed to construct sample data to hack myself. Can somebody help me do that? Thanks!
PS: Can somebody tell me how to insert collapsed code blocks? I think my code may be too long here.
Hacked with $$$a_{i, j} = w_i + w_{i+1} + \ldots + w_j$$$, where $$$w_i = n - i$$$ is the weight of cell $$$i$$$.
This way, the beauty of a painting is just the sum of weights of individual painted cells. In the best paintings, a long prefix of cells should be painted. And as you go in increasing order of $$$j$$$, you discover better and better solutions, substituting the previously found ones.
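For reference, a small generator for the hack values described above (the function name is hypothetical):

```cpp
#include <cassert>
#include <vector>

// Build the hack values: w_i = n - i (1-indexed cells) and
// a[l][r] = w_l + ... + w_r, so a painting's beauty equals the sum of the
// weights of its individually painted cells.
std::vector<std::vector<long long>> buildHack(int n) {
    std::vector<std::vector<long long>> a(n + 1, std::vector<long long>(n + 1, 0));
    for (int l = 1; l <= n; l++) {
        long long s = 0;
        for (int r = l; r <= n; r++) {
            s += n - r;      // add w_r = n - r to the running segment sum
            a[l][r] = s;
        }
    }
    return a;
}
```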
Out of curiosity, I tried something awfully similar but with randomly shuffling previous j values before iterating on them. Is it possible to make a test that TLEs on my solution?
Solution: 254452137
I believe I did something very similar to your approach and it passed: 254663089. Not exactly sure why this still passes after your submission was hacked.
Nice problemset, thanks!
Can someone please help me to understand why my code is failing on the 30th test case in test 2 in the problem C2:
Please help me debug hard C: https://codeforces.me/contest/1942/submission/254210357 (nvm, found the bug, a %2 was missing)
I believe my solution to D is $$$O(n^2+k)$$$. It uses Eppstein’s algorithm.254149567 (implementation from this blog)
Rating predictions are actually pretty accurate
cry
Can you provide the test cases for problem C2, I need test 2
My code fails at 30th test case in test 2
In your code, it seems that you just process the distances one by one. You have to process the even distances from small to large, and then the odd distances from small to large. Please try the following test case (not official):
Answer: 12, Your output: 10
Thanks buddy, after I read the editorial I figured it out that I didn't spend time thinking about the difference odd and even creates just simply went with the solution, thanks!
can someone explain problem B in a little bit more detail please ?
You have to set
a[i] = MEX(p1, p2, ..., pi) - p[i]
according to the problem statement. Rearranging it,
p[i] = MEX(p1, p2, ..., pi) - a[i]
for each index i, you have two options:-
p[i] = MEX till index i-1
then MEX for index i will be next element not used so far to construct permutation array p which can be retrieved from a set that is initially containing every element from 0 to n-1
p[i] != MEX till index i-1
then MEX for index i will remain same as MEX for index i-1, which means
p[i] = MEX-a[i]
My code implementation:-
How did we come to the conclusion that the mex increases whenever the difference is positive, and stays the same when it is negative?
There are only 2 things the MEX can do: increase or stay the same (it can never decrease since we are looking at larger and larger prefixes).
In the case of a positive difference, if the MEX stayed the same then it would have to be greater than the current value, which is impossible, because that value would have had to appear earlier in the prefix. Since the MEX is calculated on a permutation, that can't happen. So it has to increase.
In the case of a negative value, the MEX has to be less than the current value. But if it increased that means the current value changed its MEX, meaning its new MEX is at least (current value + 1) and it is actually greater. So it has to stay the same.
u mean a[i] instead of p[i] right ?
Hmm, I wrote a proof for the last line in the editorial: "you can show that P is unique," but I think a lot of that proof helps you understand the original problem too: https://codeforces.me/blog/entry/126942#comment-1135215.
In A's solution:
For k=1, the construction 1,2,3…n will always work because in any other cyclic shift, 2 will be before 1.
It should be
n will be before 1.
fixed :D
I solved C1 and was so happy for solving 3 problems, only to discover after the contest that B got TLE. The phrase "If there are multiple possible p, it is enough to print one of them" confused me, so I solved it using backtracking instead of a straightforward approach. I wish we had stronger test cases during the contest.
Very sad and unfortunate. But you should also be a bit self-critical. It's not a super complicated instruction!
In any constructive problem, of course it's possible to have multiple ways to do the task (construct an answer), and the question should come to your mind naturally: should I find all of them, or only one? If one, which one? This problem was generous, as you could print any of them. In some other problems, you are asked to print the lexicographically smallest/largest one (string), or the one sorted in ascending order (array of numbers), or the one with the highest sum over all possible solutions, or with some other constraint.
Take this as an experience. Good luck!
I solved B in a (probably) peculiar way. Let's define $$$mex_{i} = mex(p_1,...,p_{i})$$$. Also, notice that $$$p_i < mex_{i-1}$$$ can never hold, because that number is already present in the array $$$P$$$; $$$mex_{i-1}$$$ being larger than it is the evidence.
Case 1: $$$a_i = 1$$$. Imagine how $$$a_i$$$ can be $$$1$$$.
Case 2: $$$a_i \neq 1$$$.
It can be proved that under no circumstance can both approaches 2a and 2b lead to a valid $$$p_i$$$ for the same $$$a_i$$$. If this were to happen, 2a wants $$$a_i$$$ to be $$$mex_{i-1} - ?_{p_i}$$$ (some mysterious value for $$$p_i$$$), whereas 2b wants $$$a_i = mex_{i} - mex_{i-1}$$$ to hold. For them to be equal, $$$?_{p_i} = 2 \cdot mex_{i-1} - mex_{i}$$$. But $$$mex_{i} > mex_{i-1}$$$, making $$$p_i < mex_{i-1}$$$ (again a contradiction).
Thus, initializing a $$$mex$$$ variable with $$$0$$$, I read a number from $$$a$$$ and chose appropriate value for $$$p$$$. Need to maintain $$$mex$$$ variable properly. Complexity $$$O(n)$$$.
D accepted solution in $$$\mathcal{O}(n^2k)$$$ in C++20 254217379
can u explain more why if ( a[i] >=0 ... and the other way ... ) ?
why mex should increase if a[i] >= 0 ? and why should stay the same in the other case
If p[i] == mex(p1, ..., p[i-1]), then
a[i] = (mex till i) - p[i], and
(mex till i) = p[i] + 1, so
a[i] = 1!
If a[i] >= 0 then mex(p1, ..., pi) - p[i] >= 0, i.e. mex(p1, ..., pi) >= p[i]. But mex(p1, ..., pi) cannot be equal to p[i] (by definition, the MEX is an element not contained in the set), so mex(p1, ..., pi) > p[i]. Suppose mex(p[1], ..., p[i-1]) = m, meaning the first i-1 elements of p contain all elements from 0 to m-1, and possibly some greater than m. When p[i] is added, the mex can either stay the same or increase. If the mex stayed the same, then p[i] would be larger than m, in which case mex(p[1], ..., p[i]) - p[i] would be less than 0 (a contradiction). And p[i] cannot be smaller than m either, since p is a permutation and all elements from 0 to m-1 are already part of p.
Liked rating predictions idea , hope to see it each contest
you mean the current rating is not final yet ?
problem rating predictions (see the spoiler in the second line )
Alternate Solution of D (Editorial's implementation is much simpler than this):
Maintain $$$dp[i]$$$ as mentioned in the editorial. Now for finding $$$dp[i]$$$, we can do binary search on the lowest value of the top $$$k$$$ elements. It can be easily done by counting the elements which will be greater than current $$$mid$$$ value, if the value is $$$>= k$$$, then $$$mid$$$ can be the possible answer.
Now we have got the lowest value of the top $$$k$$$ elements (let's call it $$$mn$$$). To collect all the values, we take all values $$$\ge mn + 1$$$, and for whatever count remains out of the top $$$k$$$, we append $$$mn$$$ that many times. Sort it and proceed.
Note: for smaller indices you can brute force, so that at least $$$k$$$ possible combinations of values can be obtained.
F is really nice! I did similar idea as editorial, but instead i first imagined bsearch working backwards then divided into blocks put in segment tree working backwards same way. Solution here.
Why is hint 3 possible?
Also, I liked C and D quite a bit too :). A/B filler, E is math garbage tho.
You are so awesome! Could you teach me what I should do to become as good as you?
This is because $$$\lfloor \sqrt{a+b} \rfloor = \lfloor \sqrt{\lfloor a\rfloor+b}\rfloor,$$$ when $$$b\in\mathbb{Z}$$$
can someone please explain why this $$$O({n^2})$$$ of mine works for Bessie and MEX problem
Submission
Why is this code giving a runtime error when the logic is correct? Can anyone help me, please?? submission link
Your array p is so short that you go out of bounds, but the solution is also wrong when I correct it. So, what are you actually coding?
what can i say? how to solve D
I have another solution in $$$O(n \log n)$$$ for problem B.
Using this equation: $$$p_i = \mathrm{MEX(p_1,\dots, p_i)} - a_i$$$ We can test with the possible values of $$$\mathrm{MEX}$$$ in the current permutation. $$$\mathrm{MEX}$$$ always will have 2 possibilities:
So, one of the possibilities will be invalid (it goes out of the valid range of $$$p_i$$$), so we must pick the valid one. We can store a copy of $$$p$$$ to simulate the second possibility. I used this implementation, which allows us to answer $$$\mathrm{MEX}$$$ queries in $$$O(\log n)$$$; we must precalculate the initial array $$$p$$$ in $$$O(n\log n)$$$.
This is the submission 254239126
Can someone pls explain the D problem statement? I tried reading it multiple times but didn't get the question itself.
Can someone help me out in understanding how sorting in C1 keeps the algo within the time bounds?
The upper limit for $$$x$$$ is $$$2 \cdot 10^5$$$, so after sorting ($$$O(n \log n)$$$ time) it will be around $$$10^6$$$ steps. It is well known that you can do around $$$10^7$$$ to $$$10^8$$$ steps in 1 second, so it remains within the time limit.
In Problem B: The code given in solution 1 of the editorial is giving the answer:
0 1 4 2 4 (which is not a valid permutation)
for the testcase:
5
1 1 -2 1 -1
The problem statement says: "The input is given in such a way that at least one valid $$$p$$$ exists." Your input is invalid because there is no answer.
Thank you for the clarification
In problem F, how do we prove that we can compute $$$f(n)$$$ with floored square roots and there will not be any calculation errors?
There is no integer $$$k$$$ such that $$$\lfloor f_{i-1} \rfloor +a_i < k^2 \leq f_{i-1}+a_i$$$, so $$$\lfloor \sqrt{\lfloor f_{i-1} \rfloor+a_i} \rfloor=\lfloor f_{i} \rfloor$$$, we can get $$$\lfloor f_n \rfloor$$$ correctly.
MAN,what can I say?
In problem D, I did not understand how the 2-D matrix comes into the question Learning to Paint. There are n cells, so how are n*(n-1)/2 descriptions given? Someone please help me understand the question.
There is one value for every (l,r) where l <= r. Basically the upper triangle of a square
in this code for B
void solve() { int n; std::cin >> n;
}
Why are we doing mex = std::min(mex, p[i]);?
can anyone please explain it
When you add a new element, the mex either increases or stays the same. If it increases, the number you added to the set was equal to the mex; if it stayed the same, the number you added was greater than the mex (since p is a permutation). You know the MEX of the entire array is n. Now consider the last element: when it was added to the set of the remaining n-1 elements, it either increased the mex, which means this last number was equal to the MEX of the first n-1 numbers, or it did not, which means this number was something greater than that MEX. That statement is just trying to achieve this:
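Putting the backward argument together, the reconstruction can be sketched as follows (a hypothetical helper, equivalent in spirit to the editorial's solution 1):

```cpp
#include <algorithm>
#include <cassert>
#include <vector>

// Reconstruct p from a going backwards: the prefix mex after position i
// equals the minimum of the suffix p[i+1..n-1] (taken as n for the empty
// suffix), so it can be maintained with mex = min(mex, p[i]).
std::vector<long long> reconstruct(const std::vector<long long>& a) {
    int n = (int)a.size();
    std::vector<long long> p(n);
    long long mex = n;                  // mex of the full permutation
    for (int i = n - 1; i >= 0; i--) {
        p[i] = mex - a[i];              // from a_i = mex(p_1..p_i) - p_i
        mex = std::min(mex, p[i]);      // mex of the shorter prefix
    }
    return p;
}
```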
Got it, thanks!
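To make that concrete, here's a minimal sketch of the whole backwards scan (my own paraphrase in Python, assuming the problem's definition $$$a_i = MEX(p_1, \ldots, p_i) - p_i$$$ for a permutation of $$$0, \ldots, n-1$$$):

```python
def recover_permutation(a):
    """Recover p (a permutation of 0..n-1) from a_i = MEX(p_1..p_i) - p_i,
    scanning from the back."""
    n = len(a)
    p = [0] * n
    mex = n  # MEX(p_1..p_n) = n for a permutation of 0..n-1
    for i in range(n - 1, -1, -1):
        p[i] = mex - a[i]
        # this is the `mex = std::min(mex, p[i])` step being asked about:
        # MEX(p_1..p_{i-1}) = min(MEX(p_1..p_i), p_i), since removing p_i
        # can only lower the prefix MEX to p_i itself
        mex = min(mex, p[i])
    return p

print(recover_permutation([-1, 2, 1, -1, 2]))  # -> [1, 0, 2, 4, 3]
```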
you are so rude cry
https://codeforces.me/contest/1942/submission/254169040
Can anyone tell me what I am doing wrong with this logic and how to correct it? Thanks in advance.
Problems are very interesting and educational!!!
For a simpler version of problem D, where you just have to find the max score, how can we write the transition equation?
Can anyone explain the time complexity of solution 1 of problem B? Isn't it $$$n^2$$$? How does it work within the 1.5 second time limit?
Alternative solution for E (initially I could only solve it like this :D):
Let's observe that if the cows occupy the coordinates $$$x_1 < x_2 < \ldots < x_{2n}$$$, then the first player will always lose if and only if, for each odd $$$i$$$ with $$$1 \leq i < 2n$$$, the difference $$$x_{i + 1} - x_i$$$ is odd. Then from all possible ways we can subtract the number of states in which the first player loses, and we get the answer.
Let $$$dp[l][n]$$$ be the number of states when the first player loses the game ($$$l$$$ is the length of the x-axis, $$$n$$$ is the number of cows for each farmer).
It's easy to see that $$$dp[l][n] = \sum_{k=0}^{n}{dp[\lfloor \frac{l}{2} \rfloor][k] \cdot dp[l - \lfloor \frac{l}{2} \rfloor][n - k]}$$$. Note that the number of different values of $$$l$$$ is at most $$$2 \log(l)$$$, and the sum above can be calculated using FFT.
It's easy to see that the complexity of this solution is $$$O(l \cdot \log(l) \cdot \log(n))$$$, which I guess will not pass the TL, but I decided to share it with you anyway :)
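For reference, the inner sum over $$$k$$$ is an ordinary convolution of two dp rows. A naive $$$O(n^2)$$$ sketch of that single step (the FFT multiply mentioned above would replace this loop; `convolve` is just an illustrative name):

```python
def convolve(row_a, row_b):
    """Naive convolution: result[m] = sum over k of row_a[k] * row_b[m - k].
    In the dp above, row_a = dp[floor(l/2)][.] and row_b = dp[l - floor(l/2)][.];
    an FFT-based multiply would replace this double loop."""
    out = [0] * (len(row_a) + len(row_b) - 1)
    for k, a in enumerate(row_a):
        for m, b in enumerate(row_b):
            out[k + m] += a * b
    return out

print(convolve([1, 2], [3, 4]))  # -> [3, 10, 8]
```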
The editorial of D is as poor as it could be.
In C1, can someone explain why 254296468 gets a runtime error on test 6 but 254301491 is accepted? Both are based on the same logic.
**For problem B, the editorial claims that the answer $$$p$$$ is unique.**
Here's my proof.
AFSOC (assume for the sake of contradiction) that there exist two valid answers $$$P_1, P_2$$$ such that $$$\exists i$$$ where $$${P_1}_i \not= {P_2}_i$$$. WLOG (without loss of generality), fix such an $$$i.$$$
Proceed by induction.
Base case: any $$$A$$$ such that $$$|A| = 1$$$ and $$$a_1 < 0$$$. No other definition of $$$A$$$ satisfies requirements for problem.
Inductive hypothesis: $$$\forall j < i, MEX({p_1}_1, ..., {p_1}_j) = MEX({p_2}_1, ..., {p_2}_j).$$$
Case on the sign of $$$a_i.$$$
Case 1: $$$a_i > 0$$$. Then $$$MEX({p_j}_1, {p_j}_2, ..., {p_j}_{i-1}) = {p_j}_i$$$ for $$$j = 1$$$ and $$$j = 2$$$, because the MEX of a set changes if and only if the new element added to the set is the previous MEX.
Case 2: $$$a_i < 0$$$. This means that $$$MEX({p_j}_1, ..., {p_j}_{i-1})-a_i = {p_j}_i$$$ for both $$$j = 1$$$ and $$$j = 2.$$$
Note that in both cases, by the IH (inductive hypothesis), it follows that $$${p_1}_i={p_2}_i.$$$ Since $$$i$$$ was fixed arbitrarily, this holds for all $$$i \in [1, n]$$$ (why do CF and other judges insist on 1-indexing? :'( ).
This is a contradiction since we originally assumed that there would exist at least one $$$i$$$ such that $$${p_1}_i \not= {p_2}_i.$$$
Therefore, $$$P$$$ is unique. Please let me know if there is an error. Thanks.
C can be solved alternatively by going through the array from the back. For the last element, the MEX must be $$$n$$$, which gives you the last element. Then the penultimate prefix's MEX, $$$MEX(a_1, a_2, \ldots, a_{n-1})$$$, must be $$$P_n$$$, since it is the only element not yet present in the permutation. Going on like this, finding elements from the back and exploiting the fact that $$$MEX(a_1, a_2, \ldots, a_i) = \min(P_{i+1}, P_{i+2}, \ldots, P_n)$$$, we can find all the elements of the permutation.
This is also explained as Solution 2 in the editorial!
oh...i am sorry, my bad
No need to say sorry, just pointing it out :)
C is quite easy, isn't it?
i think it is easier than B
Nope. :) I solved A, B, and D but none of C1 and C2.
What's the $$$O(n^2 + k)$$$ solution for D (mentioned in the tutorial)?
Any hints would be appreciated.
Did you find that solution? If not, can this problem be modelled as the "Best Time to Buy and Sell Stock IV" problem? I have a blog posted on Codeforces about an $$$O(n + n \log k)$$$ solution to it.
How can I make this code better?
Task — C1
Time limit exceeded on test 6
For problem E, can anyone help me generate the allowed positionings for $$$l=6$$$, $$$n=2$$$? I can't seem to find all the positions, but the correct answer should be 14. I prefer a notation like "XGOXGO" (where X and O are the two farmers' cows and G is an empty cell).
Wrong positions are:
XOXOGG
GXOXOG
GGXOXO
XGGOXO
XOGGXO
XOXGGO
If we flip X and O, we get another 6. So I found a total of 12 wrong positions out of 30 ($$$2 \cdot \binom{6}{4}$$$), so my answer becomes 18. Which incorrect positions am I missing?
UPDATE: Thanks to a friend, i found out I was missing:
GXOGXO
XOGXOG
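These hand counts can be cross-checked by brute force using the odd-gap losing criterion from the alternative solution above (my own sketch; like the count of 30, it assumes ownership alternates along the line, so each set of positions gives exactly two configurations):

```python
from itertools import combinations

def count_first_player_wins(l, n):
    """Count starting configurations where the first player wins.
    Criterion from the alternative solution: with cow positions
    x_1 < ... < x_{2n}, the first player loses iff every gap
    x_2 - x_1, x_4 - x_3, ... is odd. Each position set is counted
    twice, once per alternating ownership pattern."""
    total = losing = 0
    for xs in combinations(range(1, l + 1), 2 * n):
        total += 2
        if all((xs[2 * i + 1] - xs[2 * i]) % 2 == 1 for i in range(n)):
            losing += 2
    return total - losing

print(count_first_player_wins(6, 2))  # -> 14, matching the thread
```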
Does anyone know how/when the winners will receive the prize?
In problem C1, we have the test case
8 4 0
The answer for this test case is given to be 2, but I found a construction giving 3 triangles: if you number the vertices serially from 1 to 8 and connect vertex 1 to vertices 5, 6, and 7, we get 3 triangles. Please tell me what I am missing. Construction Link
You cannot use vertex 7.
Problem C2: Bessie's Birthday Cake (Hard Version). I had the same logic; I can't tell whether it's an implementation issue (WA submission), but here is my submission: 255822976